DataGuard, which provides GDPR and privacy compliance-as-a-service, raises $20M

Watchdogs have started to raise the issue that new working practices and online activity necessitated by the spread of the coronavirus pandemic are creating new sets of privacy, security and data protection challenges. Today a startup is announcing a growth round of funding to help online businesses navigate those issues better.

DataGuard, a Munich-based startup that provides “GDPR-as-a-service”, has raised $20 million. Its cloud-based platform helps online businesses stay compliant with regional privacy regulations and best practices: it analyses customers’ data processing activities, offers suggestions for improving privacy compliance, and gives customers the means to modify their IT infrastructure and internal processes accordingly. The new money will be used to continue expanding the business across Europe and the Americas and to keep investing in the technology.

The funding comes from a single investor, London’s One Peak, and is the first outside funding for the company. DataGuard is not disclosing its valuation with this round.

The news comes at a critical time in the world of venture funding. We are seeing a mix of deals: some that were closed, or close to closing, before the worst of the pandemic reared its ugly head (meaning some deals are simply going to be put on ice, Zoom meeting or not); some being done specifically to help with business continuity amid the interruption of normal life (that is, the business is too interesting not to prop up); and some closing precisely because the startup has built something that will demonstrate just how useful it is in the months to come.

As with the strongest of funding rounds, DataGuard falls into a couple of those categories.

On one hand, it had demonstrated demand for its services before any of this hit. Today, the startup provides privacy policy services to both small and medium businesses and larger enterprises, and it has picked up 1,000 customers since launching in 2017.

“Millions of companies are striving to comply with privacy regulation such as GDPR or CCPA,” said Thomas Regier, (pictured, left) who co-founded the company with Kivanc Semen (right), in a statement.

“We are excited to partner with One Peak to help many more organizations across the globe become and remain privacy compliant. Our Privacy-as-a-Service solution provides customers with access to a proprietary full-stack platform and services from a dedicated team of data privacy experts. This enables customers to gain insights into their data processing activities and to operationalize privacy and compliance across their entire organization.”

Regier tells us that the company was bootstrapped to 100 employees, which underscores its capital efficiency — especially attractive at the moment.

On the other, the wholesale shift to online and remote working, combined with a giant surge in online traffic as more people stay at home to reduce the number of new Covid-19 cases, is driving a lot more load to — and stress testing of — websites, apps and other online services.

All that creates precisely the kind of environment in which, for a period, some of the trickier and more exacting aspects of privacy policies might be overlooked. Yet they remain important to keep intact, not least because malicious hackers could take advantage of vulnerable situations, regulators will eventually refocus and come back with heavy fines, and consumers can respond with bad PR and more.

“We have a truly horizontal product that has the potential to become an integral part of the tech stack in enterprises and SMBs alike,” said Semen in a statement. “We will use the funding to deliver on our product roadmap. We will achieve this in two ways: By increasing automation levels through improvements of the machine learning capabilities in our privacy software suite and by speeding up our development of new product categories.”

DataGuard is one of a number of startups that have emerged to help businesses navigate the waters of privacy regulations, which are usually not the core competencies of the companies but have become an essential part of how they can (and should) do business online.

Others include OneTrust, which also helps companies provide and run better data protection policies; and InCountry, which is specifically focused on providing services to help companies understand and comply with data protection policies that vary across different regions. OneTrust last year passed a $1 billion valuation, speaking to the huge opportunity and demand in this space.

One Peak believes that DataGuard’s take on the proposition is one of the more effective and efficient, one reason it’s backed the team. “We are incredibly excited to back DataGuard’s world-class founding team,” says David Klein, Co-Founder and Managing Partner at One Peak, in a statement. “We are convinced that DataGuard’s cutting-edge software suite combined with its comprehensive service offering provides both enterprises and SMBs with an end-to-end solution that fulfils their data privacy needs across the board.”

Instagram launches Co-Watching of posts during video chat

Now you can scroll Instagram together with friends, turning a typically isolating, passive experience into something more social and active. Today Instagram launched Co-Watching, which lets friends on a video chat or group video chat browse through feed posts one user has Liked or Saved, or that Instagram recommends.

Co-Watching could let people ooh, ahh, joke, and talk about Instagram’s content instead of just consuming it solo and maybe posting it to a chat thread so friends can do the same. That could lead to long usage sessions, incentivize users to build a great repository of Saved posts to share, and spur more video calls that drag people into the app. TechCrunch first reported Instagram was testing Co-Watching a year ago, so we’ll see if it managed to work out the technical and privacy questions of operating the feature.

The launch comes alongside other COVID-19 responses from Instagram that include:

-Showing a shared Instagram Story featuring all the posts from your network that include the “Stay Home” sticker

-Adding Story stickers that remind people to wash their hands or keep their distance from others

-Adding coronavirus educational info to the top of results for related searches

-Removing unofficial COVID-19 accounts from recommendations, as well as virus related content from Explore if it doesn’t come from a credible health organization

-Expanding the donation sticker to more countries so people can search for and ask friends for contributions to relevant non-profits

These updates build on Instagram’s efforts from two weeks ago which included putting COVID-19 prevention tips atop the feed, listing official health organizations atop search results, and demoting the reach of coronavirus-related content rated false by fact checkers.

But Co-Watching will remain a powerful feature long after the quarantines and social distancing end. The ability to co-view content while browsing social networks has already made screensharing app Squad popular. When Squad launched in January 2019, I suggested that “With Facebook and Snap already sniffing around Squad, it’s quite possible they’ll try to copy it.” Facebook tested a Watch Together feature for viewing Facebook Watch videos inside Messenger back in April. And now here we are with Instagram.

The question is whether Squad’s first-mover advantage and option to screenshare from any app will let it hold its own, or if Instagram Co-Watching will just popularize the concept and send users searching for more flexible options like Squad. “Everyone knows that the content flooding our feeds is a filtered version of reality” Squad CEO Esther Crawford told me. “The real and interesting stuff goes down in DMs because people are more authentic when they’re 1:1 or in small group conversations.”

With Co-Watching Instagram users can spill the tea and gossip about posts live and unfiltered over video chat. When people launch a video chat from the Direct inbox or a chat thread, they’ll see a “Posts” button that launches Co-Watching. They’ll be able to pick from their Liked, Saved, or Explore feeds and then reveal it to the video chat, with everyone’s windows lined up beneath the post.

Up to six people can Co-Watch at once on Instagram, consuming feed photos and videos but not IGTV posts. You can share public posts, or private ones that everyone in the chat is allowed to see. If one participant is blocked from viewing a post, it’s ineligible for Co-Watching.

Co-Watching could finally provide an answer to Instagram’s Time Well Spent problem. Research shows how the real danger in social network overuse is passive content consumption like endless solo feed scrolling. It can inspire envy, poor self-esteem, and leave users deflated, especially if the highlights of everyone else’s lives look more interesting than their own day-to-day reality. But active sharing, commenting, and messaging can have a positive effect on well-being, making people feel like they have a stronger support network.

With Co-Watching, Instagram has found a way to turn the one-player experience into a multi-player game. Especially now with everyone stuck at home and unable to crowd around one person’s phone to gab about what they see, there’s a great need for this new feature. One concern is that it could be used for bullying, with people all making fun of someone’s posts.

But in general, the idea of sifting through cute animal photos, dance tutorials, or epic art could take the focus off of the individuals in a video chat. Not having one’s face as the center of attention could make video chat less performative and exhausting. Instead, Co-Watching could let us do apart what we love to do together: just hang out.

One neat plug-in to join a Zoom call from your browser

Want to join a Zoom meeting in the browser without having to download its app to do so? Check out this browser plug-in — which short-cuts the needless friction the videoconferencing company has baked into the process of availing yourself of its web client.

As we noted last week, Zoom does have a zero-download option — it just hides it really well, preferring to push people to download its app. It’s pretty annoying, to say the least. Some have even called it irresponsible during the coronavirus pandemic, given how many people are suddenly forced to work from home — where they may be using locked-down corporate laptops that don’t allow them to download apps.

Software engineer Arkadiy Tetelman — currently the head of appsec/infrasec for US mobile bank Chime — was one of those who got annoyed by Zoom hiding the join-via-browser option. So he put together this nice little Zoom Redirector browser extension, which “transparently redirects any meeting links to use Zoom’s browser based web client”, as he puts it on GitHub.

“When joining a Zoom meeting, the ‘join from your browser’ link is intentionally hidden,” he warns. “This browser extension solves this problem by transparently redirecting any meeting links to use Zoom’s browser based web client.”

So far the extension is available for Chrome and Firefox. At the time of writing submissions are listed as pending for Opera and Edge.

As others have noted, it does remain possible to perform the redirect manually, by adding your meeting ID to a Zoom web client link — zoom.us/wc/join/{your-meeting-id} — though if you’re being asked to join a bunch of Zoom meetings it’s clearly a lot more convenient to have a browser plug-in take the strain for you vs saddling yourself with copy-pasting meeting IDs.
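The manual rewrite described above is simple enough to script. As a rough sketch — a hypothetical helper for illustration, not the actual Zoom Redirector extension’s code — a standard join link of the form zoom.us/j/{meeting-id} can be rewritten into the web-client form like this:

```javascript
// Hypothetical sketch: rewrite a standard Zoom join link
// (zoom.us/j/<meeting-id>) into the web-client form (zoom.us/wc/join/<meeting-id>).
function toWebClientUrl(meetingUrl) {
  const url = new URL(meetingUrl);
  // Standard join links look like https://zoom.us/j/123456789
  const match = url.pathname.match(/^\/j\/(\d+)$/);
  if (!match) return meetingUrl; // leave non-meeting links untouched
  // Preserve any query string (e.g. a meeting password parameter)
  return `${url.origin}/wc/join/${match[1]}${url.search}`;
}

console.log(toWebClientUrl("https://zoom.us/j/123456789"));
// "https://zoom.us/wc/join/123456789"
```

An actual extension would apply this kind of rewrite automatically via the browser’s webRequest or declarativeNetRequest extension APIs rather than calling a function by hand.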

While the COVID-19 pandemic has generally fuelled the use of videoconferencing, Zoom appears to be an early beneficiary — with the app enjoying a viral boom (in the digital sense of the term) in recent weeks that’s been great for earnings growth (if not immediately for its share price when it reported its Q4 bounty). And unsurprisingly it’s forecasting a bumper year.

But it’s not all positive vibes for Zoom right now. Another area where the company has faced critical attention in recent days relates to user privacy.

Over the weekend another Twitter user, going by the handle @ouren, posted a critical thread that garnered thousands of likes and retweets — detailing how Zoom can track activity on the user’s computer, including harvesting data on what other programs are running and which window the user has in the foreground.

The thread included a link to an EFF article about the privacy risks of remote working tools, including Zoom.

“The host of a Zoom call has the capacity to monitor the activities of attendees while screen-sharing,” the digital rights group warned. “This functionality is available in Zoom version 4.0 and higher. If attendees of a meeting do not have the Zoom video window in focus during a call where the host is screen-sharing, after 30 seconds the host can see indicators next to each participant’s name indicating that the Zoom window is not active.”

Given the sudden spike in attention around privacy, Zoom chipped into the discussion with an official response, writing that the “attention tracking feature is off by default”.

“Once enabled, hosts can tell if participants have the App open and active when the screen-sharing feature is in use,” it added. “It does not track any aspects of your audio/video or other applications on your window.”

However the company did not explain why it offers such a privacy hostile feature as “attention tracking” in the first place.

What are the rules wrapping privacy during COVID-19?

In a public health emergency that relies on people keeping an anti-social distance from each other to avoid spreading a highly contagious virus for which humans have no pre-existing immunity, governments around the world have been quick to look to technology companies for help.

Background tracking is, after all, what many Internet giants’ ad-targeting business models rely on, while in the US telcos were recently exposed sharing highly granular location data for commercial ends.

Some of these privacy-hostile practices face ongoing challenges under existing data protection laws in Europe — and/or have at least attracted regulator attention in the US, which lacks a comprehensive digital privacy framework — but a pandemic is clearly an exceptional circumstance. So we’re seeing governments turn to the tech sector for help.

US president Donald Trump was reported last week to have summoned a number of tech companies to the White House to discuss how mobile location data could be used for tracking citizens.

In another development this month he announced Google was working on a nationwide coronavirus screening site — in fact it’s Verily, a different division of Alphabet. But concerns were quickly raised that the site requires users to sign in with a Google account, suggesting users’ health-related queries could be linked to other online activity the tech giant monetizes via ads. (Verily has said the data is stored separately and not linked to other Google products, although the privacy policy does allow data to be shared with third parties including Salesforce for customer service purposes.)

In the UK the government has also been reported to be in discussions with telcos about mapping mobile users’ movements during the crisis — though not at an individual level. It was reported to have held an early meeting with tech companies to ask what resources they could contribute to the fight against COVID-19.

Elsewhere in Europe, Italy — which remains the European nation worst hit by the virus — has reportedly sought anonymized data from Facebook and local telcos that aggregates users’ movement to help with contact tracing or other forms of monitoring.

While there are clear public health imperatives to ensure populations are following instructions to reduce social contact, the prospect of Western democracies making like China and actively monitoring citizens’ movements raises uneasy questions about the long term impact of such measures on civil liberties.

Plus, if governments seek to expand state surveillance powers by directly leaning on the private sector to keep tabs on citizens it risks cementing a commercial exploitation of privacy — at a time when there’s been substantial push-back over the background profiling of web users for behavioral ads.

“Unprecedented levels of surveillance, data exploitation, and misinformation are being tested across the world,” warns civil rights campaign group Privacy International, which is tracking what it dubs the “extraordinary measures” being taken during the pandemic.

A couple of examples include telcos in Israel sharing location data with state agencies for COVID-19 contact tracing and the UK government tabling emergency legislation that relaxes the rules around intercept warrants.

“Many of those measures are based on extraordinary powers, only to be used temporarily in emergencies. Others use exemptions in data protection laws to share data. Some may be effective and based on advice from epidemiologists, others will not be. But all of them must be temporary, necessary, and proportionate,” it adds. “It is essential to keep track of them. When the pandemic is over, such extraordinary measures must be put to an end and held to account.”

At the same time employers may feel under pressure to be monitoring their own staff to try to reduce COVID-19 risks right now — which raises questions about how they can contribute to a vital public health cause without overstepping any legal bounds.

We talked to two lawyers from Linklaters to get their perspective on the rules that wrap extraordinary steps such as tracking citizens’ movements and their health data, and how European and US data regulators are responding so far to the coronavirus crisis.

Bear in mind it’s a fast-moving situation — with some governments (including the UK and Israel) legislating to extend state surveillance powers during the pandemic.

The interviews below have been lightly edited for length and clarity.

Europe and the UK

Dr Daniel Pauly, technology, media & telecommunications partner at Linklaters in Frankfurt 

Data protection law has not been suspended, at least when it comes to Europe. So data protection law still applies — without any restrictions. This is the baseline on which we need to work and from which we need to start. Then we need to differentiate between what the government can do and what employers can do in particular.

It’s very important to understand that when we look at governments they do have the means to allow themselves a certain use of data. Because there are opening clauses, flexibility clauses, in particular in the GDPR, when it comes to public health concerns, cross-border threats.

By using the legislative process they may introduce further powers. To give you one example of what the German government did to respond: they created a special law — the coronavirus notification regulation. We already had in place a law governing the use of personal data in respect of certain serious infections, and what they did is simply add the coronavirus infection to that list, which now means that hospitals and doctors must notify the competent authority of any COVID-19 infection.

This is pretty far reaching. They need to transmit names, contact details, sex, date of birth and many other details to allow the competent authority to gather that data and to analyze that data.

Another important topic in that field is the use of telecommunications data — in particular mobile phone data. Efficient use of that data might be one of the reasons why they obviously were quite successful in China with reducing the threat from the virus.

In Europe the government may not simply use mobile phone data and movement data — they have to anonymize it first. And this is what has happened in Germany and other European jurisdictions, including the UK: anonymized mobile phone data has been handed over to organizations who analyze it to get a better view of how people behave, how they move, and what needs to be done in order to restrict further movement, or to restrict public life. That is the picture for governments, at least in Europe and the UK.

Transparency obligations [related to government use of personal data] stem from the GDPR [General Data Protection Regulation]. When they would like to make use of mobile phone data, that is the ePrivacy directive. It is not as transparent as the GDPR, and [the EU] did not succeed in replacing that piece of legislation with a new regulation. So the ePrivacy directive again gives the various Member States, including the UK, the possibility to introduce further and more restrictive laws [for public health reasons].

[If Internet companies such as Google were to be asked by European governments to pass data on users for a coronavirus tracking purpose] it has to be taken into consideration that they have not included this in their records of processing activities — in their data protection notifications and information.

So, at least from a pure legal perspective, it would be a huge step — and I’m wondering whether it would be feasible without governments introducing special laws for that.

If [EU] governments would make use of private companies to provide them with data which has not been collected for such purposes — so that would be a huge step from the perspective of the GDPR at least. I’m not aware of something like this. I’ve certainly read there are discussions ongoing with Netflix to reduce the net traffic but I haven’t heard anything about making use of the data Google has.

I wouldn’t expect it in Europe — and particularly in Germany. Tracking people, tracking and monitoring what they are doing this is almost last resort — so I wouldn’t expect that in the next couple of weeks. And I hope then it’s over.

[So far], from my perspective, the European regulators have responded [to the coronavirus crisis] in a pretty reasonable manner by saying that, in particular, any response to the virus must be proportionate.

We still have that law in place and we need to consider that the data we’re talking about is health data — it’s the most protected data of all. Having said that there are some ways at least the GDPR is allowing the government and allowing employers to make use of that data. In particular when it comes to processing for substantial public interest. Or if it’s required for the purposes of preventive medicine or necessary for reasons of public interest.

So the legislator was wise enough to include clauses allowing the use of such data under certain circumstances and there are a number of supervisory authorities who already made public guidelines how to make use of these statutory permissions. And what they basically said was it always needs to be figured out on a case by case basis whether the data is really required in the specific case.

To give you an example, it was made clear that an employer may not ask an employee where he has been during his vacation — but he may ask have you been in any of the risk areas? And then the sufficient answer is yes or no. They do not need any further data. So it’s always [about approaching this] a smart way — by being smart you get the information you need; it’s not the flood gate suddenly opened.

You really need to look at the specific case and see how to get the data you need. Usually it’s a yes or no which is sufficient in the particular case.

The US

Caitlin Potratz Metcalf, senior U.S. associate at Linklaters and a Certified Information Privacy Professional (CIPP/US)

Even though you don’t have a structured privacy framework in the US — or one specific regulator that covers privacy — you’ve got some of the same issues. The FCC [Federal Communications Commission] will go after companies that take any action that is inconsistent with their privacy policies. And that would be misleading to consumers. Their initial focus is on consumer protection, not privacy, but in the last couple of years they’ve been wearing two hats. So there is a focus on privacy even though we don’t have a national privacy law [equivalent in scope to the EU’s GDPR] but it’s coming from a consumer protection point of view.

So, for example, the FCC back in February actually announced potential sanctions against four major telecoms companies in the US with respect to sharing data related to cell phone tracking — it wasn’t the geolocation in an app but actually pinging off cell towers — and sharing that data with third parties without proper safeguards. Because that wasn’t disclosed in their privacy policies.

They haven’t actually issued those fines but it was announced that they may pursue a $208M fine in total against these four companies: AT&T, Verizon, T-Mobile, Sprint… So they do take it very seriously — how that data is safeguarded, how it’s being shared. And the fact that we have a state of emergency doesn’t change that emphasis on consumer protection.

You’ll see the same is true for the Department of Health and Human Services (HHS) — that’s responsible for any medical or health data.

That is really limited towards entities that are covered entities under HIPAA [Health Insurance Portability and Accountability Act] or their business associates. So it doesn’t apply to everybody across the board. But if you are a hospital health plan provider, whether you’re an employer and you have a group health plan, an insurer, or a business associate supporting one of those covered entities then you have to comply with HIPAA to the extent you’re handling protected health information. And that’s a bit narrower than the definition of personal data that you’d have under GDPR.

So you’re really looking at identifying information for that patient: Their medical status, their birth date, address, things like that that might be very identifiable and related to the person. But you could share things that are more general. For example you have a middle aged man from this county who’s tested positive for COVID and is at XYZ facility being treated and his condition is stable. Or his condition is critical. So you could share that kind of level of detail — but not further.

And so HHS in February had issued a bulletin stressing that you can’t set aside the privacy and security safeguards under HIPAA during an emergency. They stressed to all covered entities that you have to still comply with the law — sanctions are still in place. And to the extent that you do have to disclose some of the protected health information, it has to be to the minimum extent necessary. And that can be disclosed either to other hospitals, or to a regulator, in order to help stem the spread of COVID and also in order to provide treatment to a patient. So they listed a couple of different exceptions for how you can share that information, but really stressing the minimum necessary.

The same would be true for an employer — like the sponsor of a group health plan — if they’re trying to share information about employees, but it’s going to be very narrow in what they can actually share. And they can’t just cite as an exception that it’s for the public health interest. You don’t necessarily have to disclose what country they’ve been to; it’s just: have they been to a region that’s on a restricted list for travel? So it’s finding creative ways to relay the necessary information you need, and if there’s anything less intrusive you’re required to go that route.

That said, just last week HHS also issued another bulletin saying that they would waive HIPAA sanctions and penalties during the nationwide public health emergency. But it was only directed to hospitals — so it doesn’t apply to all covered entities.

They also issued another bulletin saying that they would relax restrictions on sharing data using electronic means. There are very heightened restrictions on how you can share data electronically when it relates to medical and health information. And so this was allowing doctors to communicate by FaceTime or video chat and other methods that may not be encrypted or secure, or to communicate with patients, etc. So they’re giving a waiver, or just softening some of the restrictions, related to transferring health data electronically.

So you can see it’s an evolving situation but they’ve still taken a very reserved and kind of conservative approach — really emphasizing that you do need to comply with your obligation to protect health data. So that’s where you see the strongest implementations. And then the FCC coming at it from a consumer protection point of view.

Going back to the point you made earlier about Google sharing data [with governments] — you could get there, it just depends on how their privacy policies are structured.

In terms of tracking individuals we don’t have a national statute like GDPR that would prevent that but it would also be very difficult to anonymize that data because it’s so tied to individuals — it’s like your DNA; you can map a person leaving home, going to work or school, going to a doctor’s office, coming back home — and it really does have very sensitive information and because of all the specific data points it means it’s very difficult to anonymize it and provide it in a format that wouldn’t violate someone’s privacy without their consent. And so while you may not need full consent in the US you would still need to have notice and transparency about the policies.

Then it would be slightly different if you’re a California resident — the degree that you need under the new California law [CCPA] to provide disclosures and give individuals the opportunity to opt out if you were to share their information. So in that case, where the telecoms companies are potentially going to be sued by the FCC for sharing data with third parties, that in particular would also violate the new California law if consumers weren’t given the opportunity to opt out of having their information sold.

So there’s a lot of different puzzle pieces that fit together since we have a patchwork quilt of data protection — depending on the different state and federal laws.

The government, I guess, could issue other mandates or regulations [to requisition telco tracking data for a COVID-related public health purpose] — I don’t know that they will. I would envisage more of a call to arms requesting support and assistance from the private sector. Not a mandate that you must share your data, given the way our government is structured. Unless things get incredibly dire I don’t really see a mandate to companies that they have to share certain data in order to be able to track patients.

[If Google makes use of health-related searches/queries to enrich user profiles it uses for commercial purposes] that in and of itself wouldn’t be protected health information.

Google is not a [HIPAA] covered entity. And depending on what type of support it’s providing for covered entities, it may in limited circumstances be considered a business associate that could be subject to HIPAA, but in the context of just collecting data on consumers it wouldn’t be governed by that.

So as long as it’s not doing anything outside the scope of what’s already in its privacy policies then it’s fine — the fact that it’s collecting data based on searches that you run on Google should be in the privacy policy anyway. It doesn’t need to be specific to the type of search that you’re running. So the fact that it’s looking up how to get COVID testing or treatment, or what the symptoms for COVID are, things like that, can all be tied to the data [it holds on users] and enriched. And that can also be shared and sold to third parties — unless you’re a California resident. They have a separate privacy policy for California residents… They just have to be consistent with their privacy policy.

The interesting thing to me is maybe the approach that Asia has taken — where they have a lot more influence over the commercial sector and data tracking — and so you actually have the regulator stepping in and doing more tracking, not just private companies. But private companies are able to provide tracking information.

You see it actually with Uber. They’ve issued additional privacy notices to consumers — saying that to the extent we become aware of a passenger that has had COVID or a driver, we will notify people who have come into contact with that Uber over a given time period. They’re trying to take the initiative to do their own tracking to protect workers and consumers.

And they can do that — they just have to be careful about how much detail they share about personal information. Not naming names of who was impacted [but rather saying something like] ‘in the last 24 hours you may have ridden in an Uber that was impacted or known to have an infected individual in the Uber’.

[When it comes to telehealth platforms and privacy protections] it depends if they’re considered a business associate of a covered entity. So they may not be a covered entity themselves but if they are a business associate supporting a covered entity — for example a hospital or a clinic or insurers sharing that data and relying on a telehealth platform. In that context they would be governed by some of the same privacy and security regulations under HIPAA.

Some of them are slightly different for a business associate compared to a covered entity but generally you step in the shoes of the covered entity if you’re handling the covered entity’s data and have the same restrictions apply to you.

Aggregate data wouldn’t be considered protected health information — so they could [for example] share a symptom heat map that doesn’t identify specific individuals or patients and their health data.

[But] standalone telehealth apps that are collecting data directly from the consumer are not covered by HIPAA.

That’s actually a big loophole in terms of consumer protection, privacy protections related to health data. You have the same issue for all the health fitness apps — whether it’s your Fitbit or other health apps, or if you’re pregnant and you have an app that tracks your maternity or your period or things like that. Any of that data that’s collected is not protected.

The only protections you have are whatever disclosures are in the privacy policies — and the companies having to be transparent and act within that privacy policy. If they don’t, they can face an enforcement action by the FTC, but that is not regulated by the Department of Health and Human Services under HIPAA.

So it’s a very different approach than under GDPR which is much more comprehensive.

That’s not to say we won’t see a tightening of restrictions on that in the future — but individuals are freely giving that information, and in theory should read the privacy policy that’s provided when you log into the app. But most users probably don’t read that, and then that data can be shared with other third parties.

They could share it with a regulator, they could sell it to other third parties — so long as they have the proper disclosure that they may sell your personal information or share it with third parties. It depends on how their privacy policy is crafted. So long as it covers those specific actions. And for California residents it’s a more specific test — there are more disclosures that are required.

For example the type of data that you’re collecting, the purpose that you’re collecting it for, how you intend to process that data, who you intend to share it with and why. So it’s tightened for California residents but for the rest of the US you just have to be consistent with your privacy policy and you aren’t required to have the same level of disclosures.

More sophisticated, larger companies, though, definitely are already complying with GDPR — or endeavouring to comply with the California law — and so they have more sophisticated, detailed privacy notices than are maybe required by law in the US. But they’re kind of operating on a global platform and trying to have a global privacy policy.

*Disclosure: Verizon is TechCrunch’s parent company

Israel passes emergency law to use mobile data for COVID-19 contact tracing

Israel has passed an emergency law to use mobile phone data for tracking people infected with COVID-19 including to identify and quarantine others they have come into contact with and may have infected.

The BBC reports that the emergency law was passed during an overnight sitting of the cabinet, bypassing parliamentary approval.

Israel also said it will step up testing substantially as part of its response to the pandemic crisis.

In a statement posted to Facebook, prime minister Benjamin Netanyahu wrote: “We will dramatically increase the ability to locate and quarantine those who have been infected. Today, we started using digital technology to locate people who have been in contact with those stricken by the Corona. We will inform these people that they must go into quarantine for 14 days. These are expected to be large – even very large – numbers and we will announce this in the coming days. Going into quarantine will not be a recommendation but a requirement and we will enforce it without compromise. This is a critical step in slowing the spread of the epidemic.”

“I have instructed the Health Ministry to significantly increase the number of tests to 3,000 a day at least,” he added. “It is very likely that we will reach a higher figure, even up to 5,000 a day. To the best of my knowledge, relative to population, this is the highest number of tests in the world, even higher than South Korea. In South Korea, there are around 15,000 tests a day for a population five or six times larger than ours.”

On Monday an Israeli parliamentary subcommittee on intelligence and secret services discussed a government request to authorize Israel’s Shin Bet security service to assist in a national campaign to stop the spread of the novel coronavirus — but declined to vote on the request, arguing more time is needed to assess it.

Civil liberties campaigners have warned the move to monitor citizens’ movements sets a dangerous precedent.

According to WHO data, Israel had 200 confirmed cases of the coronavirus as of yesterday morning. Today the country’s health ministry reported cases had risen to 427.

Details of exactly how the tracking will work have not been released — but, per the BBC, the location data of people’s mobile devices will be collected from telcos by Israel’s domestic security agency and shared with health officials.

It also reports the health ministry will be involved in monitoring the location of infected people to ensure they are complying with quarantine rules — saying it can also send text messages to people who have come into contact with someone with COVID-19 to instruct them to self isolate.

In recent days Netanyahu has expressed frustration that Israel citizens have not been paying enough mind to calls to combat the spread of the virus via voluntary social distancing.

“This is not child’s play. This is not a vacation. This is a matter of life and death,” he wrote on Facebook. “There are many among you who still do not understand the magnitude of the danger. I see the crowds on the beaches, people having fun. They think this is a vacation.”

“According to the instructions that we issued yesterday, I ask you not leave your homes and stay inside as much as possible. At the moment, I say this as a recommendation. It is still not a directive but that can change,” he added.

Since the Israeli government’s intent behind the emergency mobile tracking powers is to combat the spread of COVID-19 by enabling state agencies to identify people whose movements need to be restricted to avoid them passing the virus to others, it seems likely law enforcement agencies will also be involved in enacting the measures.

That will mean citizens’ smartphones being not just a tool of mass surveillance but also a conduit for targeted containment — raising questions about the impact such intrusive measures might have on people’s willingness to carry mobile devices everywhere they go, even during a pandemic.

Yesterday the Wall Street Journal reported that the US government is considering similar location-tracking technology measures in a bid to check the spread of COVID-19 — with discussions ongoing between tech giants, startups and White House officials on measures that could be taken to monitor the disease.

Last week the UK government also held a meeting with tech companies to ask for their help in combating the coronavirus. Per Wired some tech firms offered to share data with the state to help with contact tracing — although, at the time, the government was not pursuing a strategy of mass restrictions on public movement. It has since shifted position.

Extra Crunch members get 60% off data privacy platform Osano

Extra Crunch is excited to announce a new community perk from Startup Battlefield alum Osano. Starting today, annual and two-year members of Extra Crunch can receive 60% off their data privacy management software for six months. You must be new to Osano to claim this offer. This coupon is only applicable to Osano’s self-service plans. Osano is an easy-to-use data privacy platform that instantly helps your website become compliant with laws such as GDPR and CCPA. Osano works to keep you out of trouble and monitors all of the vendors you share data with — so you don’t have to. Connect the data dots to see what’s hiding with Osano here.  

You can sign up for Extra Crunch and claim this deal here.

Extra Crunch is a membership program from TechCrunch that features weekly investor surveys, how-tos and interviews with experts on company building, analysis of IPOs and late-stage companies, an experience on TechCrunch.com that’s free of banner ads, discounts on TechCrunch events and several Partner Perks like the one mentioned in this article. We’re democratizing information for startups, and we’d love to have you join our community.

Sign up for Extra Crunch here.

New annual and two-year Extra Crunch members will receive details on how to claim the perk in the welcome email. The welcome email is sent after signing up for Extra Crunch.

If you are already an annual or two-year Extra Crunch member, you will receive an email with the offer at some point over the next 24 hours. If you are currently a monthly Extra Crunch subscriber and want to upgrade to annual in order to claim this deal, head over to the “account” section on TechCrunch.com and click the “upgrade” button.  

This is one of more than a dozen Partner Perks we’ve launched for annual Extra Crunch members. Other community perks include a 20% discount on TechCrunch events, 90% off an annual DocSend plan and an opportunity to claim $1,000 in AWS credits. For a full list of perks from partners, head here.

If there are other community perks you want to see us add, please let us know by emailing travis@techcrunch.com.

Sign up for an annual Extra Crunch membership today to claim this community perk. You can purchase an annual Extra Crunch membership here.

Disclosure: This offer is provided as a partnership between TechCrunch and Osano, but it is not an endorsement from the TechCrunch editorial team. TechCrunch’s business operations remain separate to ensure editorial integrity. 

To make locks touchless, Proxy bluetooth ID raises $42M

We need to go hands-off in the age of coronavirus. That means touching fewer doors, elevators, and sign-in iPads. But once a building is using phone-based identity for security, there are opportunities to speed up access to WiFi networks and printers, or personalize conference rooms and video call set-ups. Keyless office entry startup Proxy wants to deliver all of this while keeping your phone in your pocket.

“The door is just a starting point,” Proxy co-founder and CEO Denis Mars tells me. “We’re . . . empowering a movement to take back control of our privacy, our sense of self, our humanity, our individuality.”

With the contagion concerns and security risks of people rubbing dirty, cloneable, stealable key cards against their office doors, investors see big potential in Proxy. Today it’s announcing a $42 million Series B led by Scale Venture Partners with participation from former funders Kleiner Perkins and Y Combinator plus new additions Silicon Valley Bank and West Ventures.

The raise brings Proxy to $58.8 million in funding so it can staff up at offices across the world and speed up deployments of its door sensor hardware and access control software. “We’re spread thin,” says Mars. “Part of this funding is to try to grow up as quickly as possible and not grow for growth’s sake. We’re making sure we’re secure, meeting all the privacy requirements.”

How does Proxy work? Employers get their staff to install an app that knows their identity within the company, including when and where they’re allowed entry. Buildings install Proxy’s signal readers, which can either integrate with existing access control software or the startup’s own management dashboard.

Employees can then open doors, elevators, turnstiles, and garages with a Bluetooth low-energy signal without having to even take their phone out. Bosses can also opt to require a facial scan or fingerprint or a wave of the phone near the sensor. Existing keycards and fobs still work with Proxy’s Pro readers. Proxy costs about $300 to $350 per reader, plus installation and a $30 per month per reader subscription to its management software.

Now the company is expanding access to devices once you’re already in the building thanks to its SDK and APIs. Wifi router-makers are starting to pre-provision their hardware to automatically connect the phones of employees or temporarily allow registered guests with Proxy installed — no need for passwords written on whiteboards. Its new Nano sensors can also be hooked up to printers and vending machines to verify access or charge expense accounts. And food delivery companies can add the Proxy SDK so couriers can be granted the momentary ability to open doors when they arrive with lunch.

Rather than just indiscriminately beaming your identity out into the world, Proxy uses tokenized credentials so only its sensors know who you are. Users have to approve of new networks’ ability to read their tokens, Proxy has SOC-2 security audit certification, and complies with GDPR. “We feel very strongly about where the biometrics are stored . . . they should stay on your phone,” says Mars.
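Proxy hasn’t published how its credentials actually work, but the general idea of a tokenized credential — an opaque, rotating value that only a provisioned reader can map back to an identity — can be sketched roughly like this. Everything below (the HMAC construction, the 30-second rotation window, the function names) is an illustrative assumption, not Proxy’s real protocol:

```python
import hashlib
import hmac
import time

def issue_token(secret: bytes, user_id: str, interval: int = 30) -> str:
    """Derive a rotating, opaque token from a per-user secret.

    Hypothetical sketch only: the phone broadcasts this value instead of a
    stable identifier, so an eavesdropper sees a string that changes every
    `interval` seconds and reveals nothing without the shared secret.
    """
    window = int(time.time() // interval)
    msg = f"{user_id}:{window}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_token(secret: bytes, user_id: str, token: str, interval: int = 30) -> bool:
    """A reader provisioned with the secret recomputes and matches the token."""
    now = int(time.time() // interval)
    for window in (now, now - 1):  # tolerate one interval of clock skew
        msg = f"{user_id}:{window}".encode()
        expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, token):  # constant-time comparison
            return True
    return False
```

The point of a scheme in this family is the one the article makes: the beacon itself is meaningless to anyone who hasn’t been explicitly provisioned, which is why users must approve each new network that wants the ability to read their tokens.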

Yet despite integrating with the technology for two-factor entry unlocks, Mars says “We’re not big fans of facial recognition. You don’t want every random company having your face in their database. The face becomes the password you were supposed to change every 30 days.”

Keeping your data and identity safe as we see an explosion of Internet of Things devices was actually the impetus for starting Proxy. Mars had sold his teleconferencing startup Bitplay to Jive Software, where he met his eventual co-founder Simon Ratner, who’d joined after his video annotation startup Omnisio was acquired by YouTube. Mars was frustrated that every IoT lightbulb and appliance wanted him to download an app, set up a profile, and give it his data.

The duo founded Proxy in 2013 as a universal identity signal. Today it has over 60 customers. While other apps want you to constantly open them, Proxy’s purpose is to work silently in the background and make people more productive. “We believe the most important technologies in the world don’t seek your attention. They work for you, they empower you, and they get out of the way so you can focus your attention on what matters most — living your life.”

Now Proxy could actually help save lives. “The nature of our product is contactless interactions in commercial buildings and workplaces so there’s a bit of an unintended benefit that helps prevent the spread of the virus” Mars explains. “We have seen an uptick in customers starting to set doors and other experiences in longer-range hands-free mode so that users can walk up to an automated door and not have to touch the handles or badge/reader every time.”

The big challenge facing Proxy is maintaining security and dependability since it’s a mission-critical business. A bug or outage could potentially lock employees out of their workplace (when they eventually return from quarantine). It will have to keep hackers out of employee files. Proxy needs to stay ahead of access control incumbents like ADT and Honeywell as well as smaller direct competitors like $10 million-funded Nexkey and $28 million-funded Openpath.

Luckily, Proxy has found a powerful growth flywheel. First an office in a big building gets set up, then they convince the real estate manager to equip the lobby’s turnstiles and elevators with Proxy. Other tenants in the building start to use it, so they buy Proxy for their office. Then they get their offices in other cities on board…starting the flywheel again. That’s why Proxy is doubling down on sales to commercial real estate owners.

The question is when Proxy will start knocking on consumers’ doors. While leveling up into the enterprise access control software business might be tough for home smartlock companies like August, Proxy could go down market if it built more physical lock hardware. Perhaps we’ll start to get smart homes that know who’s home, and stop having to carry pointy metal sticks in our pockets.

India used facial recognition tech to identify 1,100 individuals at a recent riot

Law enforcement agencies in India used facial recognition to identify more than 1,100 individuals who took part in communal violence in the national capital last month, a top minister said in the lower house of the parliament on Wednesday.

In what is the first admission of its kind in the country, Amit Shah, India’s home minister, said the law enforcement agencies deployed a facial recognition system, and fed it with images from government-issued identity cards, including 12-digit Aadhaar that has been issued to more than a billion Indians and driving licenses, “among other databases,” to identify alleged culprits in the communal violence in northeast Delhi on February 25 and 26.

“This is a software. It does not see faith. It does not see clothes. It only sees the face and through the face the person is caught,” said Shah, responding to an individual who had urged New Delhi to not drag innocent people into the facial surveillance.

The admission further demonstrates how the Indian government has rushed to deploy facial recognition technology in the absence of regulation overseeing its usage. Critics have urged the government to hold consultations and formulate a law before deploying the technology.

“The use of Aadhaar for this purpose without any judicial authorisation violates the judgement of the Supreme Court in KS Puttaswamy v. UoI (2019),” said New Delhi-based digital rights advocacy group Internet Freedom Foundation, which also questioned the sophistication of the facial recognition system.

The facial recognition system that the government used in Delhi was first acquired by the Delhi Police to identify missing children. In 2019, the system had an accuracy rate of 1% and it even failed to distinguish between boys and girls, the group said.

“All of this is being done without any clear underlying legal authority and is in clear violation of the Right to Privacy judgment (that the Indian apex court upheld in 2017),” said Apar Gupta, executive director at IFF. “Facial recognition technology is still evolving and the risks of such evolutionary tech being used in policing are significant,” said Gupta.

Several law enforcement agencies have been using facial recognition for years now. In January and early February, police in New Delhi and the northern state of Uttar Pradesh used the technology during protests against a new citizenship law that critics say marginalises Muslims.

Your VPN or ad-blocker app could be collecting your data

The underpinnings of how app store analytics platforms operate were exposed this week by BuzzFeed, which uncovered the network of mobile apps used by the popular analytics firm Sensor Tower to amass app data. The company had operated at least 20 apps, including VPNs and ad blockers, whose main purpose was to collect app usage data from end users in order to make estimations about app trends and revenues. Unfortunately, these sorts of data collection apps are not new — nor unique to Sensor Tower’s operation.

Sensor Tower was found to operate apps such as Luna VPN, for example, as well as Free and Unlimited VPN, Mobile Data, and Adblock Focus, among others. After BuzzFeed reached out, Apple removed Adblock Focus and Google removed Mobile Data. Others are still being investigated, the report said.

Apps’ collection of usage data has been an ongoing issue across the app stores.

Facebook and Google have both operated such apps, not always transparently, and Sensor Tower’s key rival App Annie continues to do the same today.

Facebook

For Facebook, its 2013 acquisition of VPN app maker Onavo for years served as a competitive advantage. The traffic through the app gave Facebook insight into what other social applications were growing in popularity — so Facebook could either clone their features or acquire them outright. When Apple finally booted Onavo from the App Store half a decade later, Facebook simply brought back the same code in a new wrapper — then called the Facebook Research app. This time, it was a bit more transparent about its data collection, as the Research app was actually paying for the data.

But Apple kicked that app out, too. So Facebook last year launched Study and Viewpoints to further its market research and data collection efforts. These apps are still live today.

Google

Google was also caught doing something similar by way of its Screenwise Meter app, which invited users 18 and up (or 13 if part of a family group) to download the app and participate in the panel. The app’s users allowed Google to collect their app and web usage in exchange for gift cards. But like Facebook, Google’s app used Apple’s Enterprise Certificate program to work — a violation of Apple policy that saw the app removed, again following media coverage. Screenwise Meter returned to the App Store last year and continues to track app usage, among other things, with panelists’ consent.

App Annie

App Annie, a firm that directly competes with Sensor Tower, has acquired mobile data companies and now operates its own set of apps to track app usage under those brands.

In 2014, App Annie bought Distimo, and as of 2016 has run Phone Guardian, a “secure Wi-Fi and VPN” app, under the Distimo brand.

The app discloses its relationship with App Annie in its App Store description, but remains vague about its true purpose:

“Trusted by more than 1 million users, App Annie is the leading global provider of mobile performance estimates. In short, we help app developers build better apps. We build our mobile performance estimates by learning how people use their devices. We do this with the help of this app.”

In 2015, App Annie acquired Mobidia. Since 2017, it has operated a real-time data usage monitor, My Data Manager, under that brand as well. The App Store description only offers the same vague disclosure, which means users aren’t likely aware of what they’re agreeing to.

Disclosure?

The problem with apps like App Annie’s and Sensor Tower’s is that they’re marketed as offering a particular function, when their real purpose for existing is entirely another.

The app companies’ defense is that they do disclose and require consent during onboarding. Sensor Tower’s apps, for example, explicitly tell users what is collected and what is not.


App Annie’s app offers a similar disclosure, and takes the extra step of identifying the parent company by name.

Despite these opt-ins, end users may still not understand that their VPN app is actually tied to a much larger data collection operation. After all, App Annie and Sensor Tower aren’t household names (unless you’re an app publisher or marketer).

Apple and Google’s responsibility 

Apple and Google, let’s be fair, are also culpable here.

Of course, Google is more pro-data collection because of the nature of its own business as an advertising-powered company. (It even tracks users in the real-world via the Google Maps app.)

Apple, meanwhile, markets itself as a privacy-focused company, so is deserving of increased scrutiny.

It seems unfathomable that, following the Onavo scandal, Apple wouldn’t have taken a closer look into the VPN app category to ensure its apps were compliant with its rules and transparent about the nature of their businesses. In particular, it seems Apple would have paid close attention to apps operated by companies in the app store intelligence business, like App Annie and its subsidiaries.

Apple is surely aware of how these companies acquire data — it’s common industry knowledge. Plus, App Annie’s acquisitions were publicly disclosed.

But Apple is conflicted. It wants to protect app usage and user data (and be known for protecting such data) by not providing any broader app store metrics of its own. However, it also knows that app publishers need such data to operate competitively on the App Store. So instead of being proactive about sweeping the App Store for data collection utilities, it remains reactive by pulling select apps when the media puts them on blast, as BuzzFeed’s report has since done. That allows Apple to maintain a veil of innocence.

But covertly pulling user data is only one way to operate. As Facebook and Google have since realized, it’s easier to run these sorts of operations on the App Store if the apps just say, basically, “this is a data collection app,” and/or offer payment for participation — as do many marketing research panels. This is a more transparent relationship from a consumer’s perspective, too, as they know they’re agreeing to sell their data.

Meanwhile, Sensor Tower and App Annie competitor Apptopia says it tested, then scrapped, its own ad blocker app around six years ago, but claims it never collected data with it. It now favors getting its data directly from its app developer customers.

“We can confidently state that 100% of the proprietary data we collect is from shared App Analytics Accounts where app developers proactively and explicitly share their data with us, and give us the right to use it for modeling,” stated Apptopia Co-founder and COO, Jonathan Kay. “We do not collect any data from mobile panels, third-party apps, or even at the user/device level.”

This isn’t necessarily better for end users, as it further obscures the data collection and sharing process. Consumers don’t know which app developers are sharing this data, what data is being shared, or how it’s being utilized. (Fortunately for those who do care, Apple allows users to disable the sharing of diagnostic and usage data from within iOS Settings.)

Data collection done by app analytics firms is only one of many, many ways that apps leak data, however.

In fact, many apps collect personal data — including data that’s far more sensitive than anonymized app usage trends — by way of their included SDKs (software development kits). These tools allow apps to share data with numerous technology companies including ad networks, data brokers, and aggregators, both large and small. It’s not illegal and mainstream users probably don’t know about this either.

Instead, user awareness seems to crop up through conspiracy theories, like “Facebook is listening through the microphone,” without realizing that Facebook collects so much data it doesn’t really need to do so. (Well, except when it does).

In the wake of BuzzFeed’s reporting, Sensor Tower says it’s “taking immediate steps to make Sensor Tower’s connection to our apps perfectly clear, and adding even more visibility around the data their users share with us.”

Apple, Google, and App Annie have been asked for comment. Google isn’t providing an official comment. Apple didn’t respond. App Annie did not have a comment ready by deadline.

Sensor Tower’s full statement is below:

Our business model is predicated on high-level, macro app trends. As such, we do not collect or store any personally identifiable information (PII) about users on our servers or elsewhere. In fact, based on the way our apps are designed, such data is separated before we could possibly view or interact with it, and all we see are ad creatives being served to users. What we do store is extremely high level, aggregated advertising data that may demonstrate trends that we share with customers.

Our privacy policy follows best practices and makes our data use clear. We want to reiterate that our apps do not collect any PII, and therefore it cannot be shared with any other entity, Sensor Tower or otherwise. We’ve made this very clear in our privacy policy, which users actively opt into during the apps’ onboarding processes after being shown an unambiguous disclaimer detailing what data is shared with us. As a routine matter, and as our business evolves, we’ll always take a privacy-centric approach to new features to help ensure that any PII remains uncollected and is fully safeguarded.

Based on the feedback we’ve received, we’re taking immediate steps to make Sensor Tower’s connection to our apps perfectly clear, and adding even more visibility around the data their users share with us.


Daily Crunch: French data watchdog investigates Criteo

Criteo faces a privacy investigation, an e-discovery startup raises $62 million and hackers hack other hackers. Here’s your Daily Crunch for March 10, 2020.

1. Adtech giant Criteo is being investigated by France’s data watchdog

Criteo is under investigation by the French data protection watchdog, the CNIL, following a complaint filed by privacy rights campaign group Privacy International.

Back in November 2018, a few months after GDPR (Europe’s updated data protection framework) came into force, Privacy International filed complaints against a number of companies operating in the space — including Criteo. A subsequent investigation by the rights group found adtech trackers on mental health websites sharing sensitive user data for ad targeting purposes.

2. Everlaw announces $62M Series C to continue modernizing legal discovery

Everlaw is bringing modern data management, visualization and machine learning to e-discovery, the process in which legal entities review large amounts of evidence to build a case. CapitalG (Alphabet’s growth equity investment fund) and Menlo Ventures led the round.

3. Hackers are targeting other hackers by infecting their tools with malware

Cybereason’s Amit Serper found that the attackers in this years-long campaign are taking existing hacking tools and injecting a powerful remote-access trojan. When the tools are opened, the hackers gain full access to the target’s computer.

4. Amazon creates $5M relief fund to aid small businesses in Seattle impacted by coronavirus outbreak

The fund will provide cash grants to local small businesses in need during the novel coronavirus outbreak. The money will be directed toward small businesses with fewer than 50 employees or less than $7 million in annual revenue, and with a physical presence within a few blocks of Amazon’s Regrade and South Lake Union office buildings.

5. Stitch Fix’s sharp decline signals high growth hurdles for tech-enabled startups

Shares of Stitch Fix, a digitally-enabled “styling service,” are off sharply this morning after its earnings failed to excite public market investors. The firm, worth over $29 per share as recently as February, opened today worth just $14.75 per share. (Extra Crunch membership required.)

6. Facebook Stories tests cross-posting to its pet, Instagram

Facebook’s latest colonization of Instagram has begun — the social network is testing the option to cross-post Stories to Instagram, instead of just vice-versa.

7. Sequoia is giving away $21M to a payments startup it recently funded as it walks away from deal

Sequoia Capital has, for the first time in its history, parted ways with a newly funded company (Finix) over a purported conflict of interest and, almost more shockingly, handed back its board seat, its information rights, its shares and its full investment.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.