Health Care Product Managers Deal With Privacy Issues

Product managers have to deal with patient data privacy laws
Image Credit: Stock Catalog

Every product manager wants to be in charge of a product that customers want. However, it turns out that some of us have to deal with situations in which our customers want what we have, but they are also afraid to give us what we need in order to deliver our product to them. A great example of this is happening in the health care market. Customers are very aware of data privacy issues, and so they are hesitant to use the health care products that product managers are providing them with.


The Challenge Posed By Health Data

Most consumers have a lot of health care data. It’s just that it is spread all over the place – at hospitals, in different doctors’ offices, and even in insurance claims that have been filed. A number of different companies have come to the realization that all of this health care data is out there and they want to help bring it all together into one place. As you can well imagine, this is opening up a whole host of data privacy and control questions.

Product managers at these companies are in the process of introducing online products that customers can use to bring together their health data that is currently being stored in a number of different locations. The goal of these products is to allow customers to consolidate their health care information such as diagnostics and lab results in a way that will allow the customer to access it easily via either their smartphones or their computers.

There are some significant issues that these product managers are facing. The data that their products need is both highly personal and potentially very valuable to companies such as drug makers. Currently, the federal health privacy law known as the Health Insurance Portability and Accountability Act (HIPAA) places requirements on companies to protect a customer’s health care data. This law applies to physicians and health-care companies, including hospitals, insurers, and any third parties who work for them.


The Privacy Issue

However, things may be changing in the health care data market. When a tech firm gets health care data directly from a customer, they may not be subject to the HIPAA rules. Note that this may include when a tech firm gets customer health care data from a doctor or a hospital on the customer’s behalf. Instead, these firms and their product managers are being regulated by the Federal Trade Commission, which tends to focus on whether or not companies are abiding by their own privacy policies.

A key point that these product managers are going to have to deal with is that most customers believe that all of their health information is covered under HIPAA. It turns out that it isn’t. What is going to have to happen is that product managers who work for technology companies that are not regulated by HIPAA are going to have to say that they use their customers’ health data only with their permission. A big question is just exactly how much detail a customer should get. Additionally, many product managers believe that customers are going to want to have complete control over their personal data, including getting paid if the data is ever sold.

The products that the product managers are in the process of introducing will pull customer health data from multiple health-care providers. The products will focus on seven different areas of health, including allergies and prescriptions. Data will also be pulled from lab companies. Customers will be able to access their data through both an app and a website. The goal of these products is to allow customers to link in the different sources of their health-care information that are relevant to their personal care while providing meaning and context to their information.


What All Of This Means For You

Health care is a huge market and to be a product manager who works in this area means that you might be responsible for a very popular product. However, in this age in which we are living, customers are very aware that their personal health-care data is a valuable resource that many different people would like to get their hands on. This is why customers are often not willing to share it – even with product managers who have health-care products. How best to deal with this issue?

Customers have a lot of health-care data that is distributed over many different providers. Product managers want to bring all of this data together and allow customers to manage it. These online products would pull data from many different sources and allow customers to access it via their computers and smartphones. Customer health care data has traditionally been protected by HIPAA. However, if customers give their data to tech firms, it may not be protected by HIPAA. Product managers are going to have to treat customer health-care data very carefully, and the goal of these products is going to be to provide customers with complete control over it.

It sure seems as though customers would like to have more access to and control over their personal health-care information. However, this is an area that is fraught with danger. If the product managers somehow make a mistake and customers start to believe that their data is being used in ways that they don’t approve of, then these new health-care products could quickly fail. These product managers have a tough task ahead of them if they want to make their products produce a healthy return!


– Dr. Jim Anderson Blue Elephant Consulting –
Your Source For Real World Product Management Skills™


Question For You: How can health-care data product managers convince their customers that their data is safe with them?


Click here to get automatic updates when The Accidental Product Manager Blog is updated.
P.S.: Free subscriptions to The Accidental Product Manager Newsletter are now available. It’s your product – it’s your career. Subscribe now: Click Here!

What We’ll Be Talking About Next Time

One of the most hotly contested markets right now is the home delivery of food. In just about every major city, you can see multiple options for getting food delivered to you from just about any restaurant. However, for those potential customers who live in smaller towns and cities, home delivery of food has not been an option – until now. Product managers are starting to view the smaller towns as the next battlefield now that they are already entrenched in the big cities. How will they change their product development definition in order to capture these new customers?


Following Apple’s launch of privacy labels, Google to add a ‘safety’ section in Google Play

Months after Apple’s App Store introduced privacy labels for apps, Google announced its own mobile app marketplace, Google Play, will follow suit. The company today pre-announced its plans to introduce a new “safety” section in Google Play, rolling out next year, which will require app developers to share what sort of data their apps collect, how it’s stored, and how it’s used.

For example, developers will need to share what sort of personal information their apps collect, like users’ names or emails, and whether it collects information from the phone, like the user’s precise location, their media files or contacts. Apps will also need to explain how the app uses that information — for example, for enhancing the app’s functionality or for personalization purposes.

Developers who already adhere to specific security and privacy practices will additionally be able to highlight that in their app listing. On this front, Google says it will add new elements that detail whether the app uses security practices like data encryption; if the app follows Google’s Families policy, related to child safety; if the app’s safety section has been verified by an independent third party; whether the app needs data to function or allows users to choose whether or not to share data; and whether the developer agrees to delete user data when a user uninstalls the app in question.

Apps will also be required to provide their privacy policies.

While clearly inspired by Apple’s privacy labels, there are several key differences. Apple’s labels focus on what data is being collected for tracking purposes and what’s linked to the end user. Google’s additions seem to be more about whether or not you can trust that the data being collected is being handled responsibly, by allowing the developer to showcase if they follow best practices around data security, for instance. It also gives the developer a way to make a case for why it’s collecting data right on the listing page itself. (Apple’s “ask to track” pop-ups on iOS now force developers to beg inside their apps for access to user data.)

Another interesting addition is that Google will allow the app data labels to be independently verified. Assuming these verifications are handled by trusted names, they could help to convey to users that the disclosures aren’t lies. One early criticism of Apple’s privacy labels was that many were providing inaccurate information — and were getting away with it, too.

Google says the new features will not roll out until Q2 2022, but it wanted to announce now in order to give developers plenty of time to prepare.

Image Credits: Google

There is, of course, a lot of irony to be found in an app privacy announcement from Google.

The company was one of the longest holdouts on issuing privacy labels for its own iOS apps, as it scrambled to review (and re-review, we understand) the labels’ content and disclosures. After initially claiming its labels would roll out “soon,” many of Google’s top apps then entered a lengthy period where they received no updates at all, as they were no longer compliant with App Store policies.

It took Google months after the deadline had passed to provide labels for its top apps. And when it did, it was mocked by critics — like privacy-focused search engine DuckDuckGo — for how much data apps like Chrome and the Google app collect.

Google’s plan to add a safety section of its own to Google Play gives it a chance to shift the narrative a bit.

It’s not a privacy push, necessarily. They’re not even called privacy labels! Instead, the changes seem designed to allow app developers to better explain if you can trust their app with your data, rather than setting the expectation that the app should not be collecting data in the first place.

How well this will resonate with consumers remains to be seen. Apple has made a solid case that it’s a company that cares about user privacy, and is adding features that put users in control of their data. It’s a hard argument to fight back against — especially in an era that’s seen too many data breaches to count, careless handling of private data by tech giants, widespread government spying, and a creepy adtech industry that grew to feel entitled to user data collection without disclosure.

Google says when the changes roll out, non-compliant apps will be required to fix their violations or become subject to policy enforcement. It hasn’t yet detailed how that process will be handled, or whether it will pause app updates for apps in violation.

The company noted its own apps would be required to share this same information and a privacy policy, too.

 

Peloton’s leaky API let anyone grab riders’ private account data

Halfway through my Monday afternoon workout last week, I got a message from a security researcher with a screenshot of my Peloton account data.

My Peloton profile is set to private and my friends list is deliberately zero, so nobody can view my profile, age, city, or workout history. But a bug allowed anyone to pull users’ private account data directly from Peloton’s servers, even with their profile set to private.

Peloton, the at-home fitness brand synonymous with its indoor stationary bike, has more than three million subscribers. Even President Biden is said to own one. The exercise bike alone costs upwards of $1,800, but anyone can sign up for a monthly subscription to join a broad variety of classes.

As Biden was inaugurated (and his Peloton moved to the White House — assuming the Secret Service let him), Jan Masters, a security researcher at Pen Test Partners, found he could make unauthenticated requests to Peloton’s API for user account data without it checking to make sure the person was allowed to request it. (An API allows two things to talk to each other over the internet, like a Peloton bike and the company’s servers storing user data.)

But the exposed API let him — and anyone else on the internet — access a Peloton user’s age, gender, city, weight, workout statistics, and whether it was the user’s birthday, details that are hidden when users’ profile pages are set to private.
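To make this class of bug concrete, here is a minimal sketch (not Peloton’s actual code) of the kind of server-side authorization check such an endpoint should perform before returning private profile fields. The Profile type, its field names, and the profileResponse function are hypothetical illustrations:

```swift
// Illustrative sketch only, not Peloton's actual code: the authorization check
// that a "get user profile" endpoint should perform before returning private fields.
struct Profile {
    let userID: String
    let username: String   // public
    let age: Int           // private when the profile is set to private
    let city: String       // private
    let weight: Double     // private
    let isPrivate: Bool
}

enum APIError: Error {
    case unauthenticated
}

// `requesterID` comes from a validated session token; nil means the request
// carried no credentials at all, which is the situation the researcher reported.
func profileResponse(for profile: Profile, requesterID: String?) throws -> [String: String] {
    guard let requester = requesterID else {
        // Reject unauthenticated requests outright instead of serving data.
        throw APIError.unauthenticated
    }
    if profile.isPrivate && requester != profile.userID {
        // Private profiles expose only non-sensitive fields to other users.
        return ["userID": profile.userID, "username": profile.username]
    }
    // The owner (or a public profile) gets the full record.
    return [
        "userID": profile.userID,
        "username": profile.username,
        "age": String(profile.age),
        "city": profile.city,
        "weight": String(profile.weight),
    ]
}
```

The point of the sketch is simply that the server, not the client, has to decide which fields a given requester is allowed to see.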

Masters reported the leaky API to Peloton on January 20 with a 90-day deadline to fix the bug, the standard window of time that security researchers give companies to fix bugs before details are made public.

But that deadline came and went, the bug wasn’t fixed, and Masters hadn’t heard back from the company, aside from an initial email acknowledging receipt of the bug report. Instead, Peloton merely restricted access to its API to its members. But that just meant anyone could sign up for a monthly membership and get access to the API again.

TechCrunch contacted Peloton after the deadline lapsed to ask why the vulnerability report had been ignored, and Peloton confirmed yesterday that it had fixed the vulnerability. (TechCrunch held this story until the bug was fixed in order to prevent misuse.)

Peloton spokesperson Amelise Lane provided the following statement:

It’s a priority for Peloton to keep our platform secure and we’re always looking to improve our approach and process for working with the external security community. Through our Coordinated Vulnerability Disclosure program, a security researcher informed us that he was able to access our API and see information that’s available on a Peloton profile. We took action, and addressed the issues based on his initial submissions, but we were slow to update the researcher about our remediation efforts. Going forward, we will do better to work collaboratively with the security research community and respond more promptly when vulnerabilities are reported. We want to thank Ken Munro for submitting his reports through our CVD program and for being open to working with us to resolve these issues.

Masters has since put up a blog post explaining the vulnerabilities in more detail.

Munro, who founded Pen Test Partners, told TechCrunch: “Peloton had a bit of a fail in responding to the vulnerability report, but after a nudge in the right direction, took appropriate action. A vulnerability disclosure program isn’t just a page on a website; it requires coordinated action across the organisation.”

But questions remain for Peloton. When asked repeatedly, the company declined to say why it had not responded to Masters’ vulnerability report. It’s also not known if anyone maliciously exploited the vulnerabilities, such as mass-scraping account data.

Facebook, LinkedIn, and Clubhouse have all fallen victim to scraping attacks that abuse access to APIs to pull in data about users on their platforms. But Peloton declined to confirm if it had logs to rule out any malicious exploitation of its leaky API.

Disqus facing $3M fine in Norway for tracking users without consent

Disqus, a commenting plugin that’s used by a number of news websites and which can share user data for ad targeting purposes, has got into hot water in Norway for tracking users without their consent.

The local data protection agency said today it has notified the U.S.-based company of an intent to fine it €2.5 million (~$3M) for failures to comply with requirements in Europe’s General Data Protection Regulation (GDPR) on accountability, lawfulness and transparency.

Disqus’ parent, Zeta Global, has been contacted for comment.

Datatilsynet said it acted following a 2019 investigation in Norway’s national press — which found that default settings buried in the Disqus plug-in opted sites into sharing data on millions of users in markets including the U.S.

And while in most of Europe the company was found to have applied an opt-in to gather consent from users to be tracked — likely in order to avoid trouble with the GDPR — it appears to have been unaware that the regulation applies in Norway.

Norway is not a member of the European Union but is in the European Economic Area — which adopted the GDPR in July 2018, slightly after it came into force elsewhere in the EU. (Norway transposed the regulation into national law also in July 2018.)

The Norwegian DPA writes that Disqus’ unlawful data-sharing has “predominantly been an issue in Norway” — and says that seven websites are affected: NRK.no/ytring, P3.no, tv.2.no/broom, khrono.no, adressa.no, rights.no and document.no.

“Disqus has argued that their practices could be based on the legitimate interest balancing test as a lawful basis, despite the company being unaware that the GDPR applied to data subjects in Norway,” the DPA’s director-general, Bjørn Erik Thon, goes on.

“Based on our investigation so far, we believe that Disqus could not rely on legitimate interest as a legal basis for tracking across websites, services or devices, profiling and disclosure of personal data for marketing purposes, and that this type of tracking would require consent.”

“Our preliminary conclusion is that Disqus has processed personal data unlawfully. However, our investigation also discovered serious issues regarding transparency and accountability,” Thon added.

The DPA said the infringements are serious and have affected “several hundred thousands of individuals”, adding that the affected personal data “are highly private and may relate to minors or reveal political opinions”.

“The tracking, profiling and disclosure of data was invasive and nontransparent,” it added.

The DPA has given Disqus until May 31 to comment on the findings ahead of issuing a fine decision.

Publishers reminded of their responsibility

Datatilsynet has also fired a warning shot at local publishers who were using the Disqus platform — pointing out that website owners “are also responsible under the GDPR for which third parties they allow on their websites”.

So, in other words, even if you didn’t know about a default data-sharing setting, that’s not an excuse, because it’s your legal responsibility to know what any code you put on your website is doing with user data.

The DPA adds that “in the present case” it has focused the investigation on Disqus — providing publishers with an opportunity to get their houses in order ahead of any future checks it might make.

Norway’s DPA also has some admirably plain language to explain the “serious” problem of profiling people without their consent. “Hidden tracking and profiling is very invasive,” says Thon. “Without information that someone is using our personal data, we lose the opportunity to exercise our rights to access, and to object to the use of our personal data for marketing purposes.

“An aggravating circumstance is that disclosure of personal data for programmatic advertising entails a high risk that individuals will lose control over who processes their personal data.”

Zooming out, the issue of adtech industry tracking and GDPR compliance has become a major headache for DPAs across Europe — which have been repeatedly slammed for failing to enforce the law in this area since GDPR came into application in May 2018.

In the UK, for example (which transposed the GDPR before Brexit so still has an equivalent data protection framework for now), the ICO has been investigating GDPR complaints against real-time bidding’s (RTB) use of personal data to run behavioral ads for years — yet hasn’t issued a single fine or order, despite repeatedly warning the industry that it’s acting unlawfully.

The regulator is now being sued by complainants over its inaction.

Ireland’s DPC, meanwhile — which is the lead DPA for a swathe of adtech giants that site their regional HQ in the country — has a number of open GDPR investigations into adtech (including RTB), but it has also failed to issue any decisions in this area almost three years after the regulation began being applied.

Its lack of action on adtech complaints has contributed significantly to rising domestic (and international) pressure on its GDPR enforcement record more generally, including from the European Commission. (And it’s notable that the latter’s most recent legislative proposals in the digital arena include provisions that seek to avoid the risk of similar enforcement bottlenecks.)

The story on adtech and the GDPR looks a little different in Belgium, though, where the DPA appears to be inching toward a major slap-down of current adtech practices.

A preliminary report last year by its investigatory division called into question the legal standard of the consents being gathered via a flagship industry framework, designed by IAB Europe. This so-called ‘Transparency and Consent’ framework (TCF) was found not to comply with the GDPR’s principles of transparency, fairness and accountability, or the lawfulness of processing.

A final decision is expected on that case this year — but if the DPA upholds the division’s findings it could deal a massive blow to the behavioral ad industry’s ability to track and target Europeans.

Studies suggest Internet users in Europe would overwhelmingly choose not to be tracked if they were actually offered the GDPR standard of a specific, clear, informed and free choice, i.e. without any loopholes or manipulative dark patterns.

What3Words sends legal threat to a security researcher for sharing an open-source alternative

A U.K. company behind digital addressing system What3Words has sent a legal threat to a security researcher for offering to share an open-source software project with other researchers, which What3Words claims violates its copyright.

Aaron Toponce, a systems administrator at XMission, received a letter on Thursday from a law firm representing What3Words, requesting that he delete tweets related to the open-source alternative, WhatFreeWords. The letter also demands that he disclose to the law firm the identity of the person or people with whom he had shared a copy of the software, agree not to make any further copies of the software, and delete any copies of the software in his possession.

The letter gave him until May 7 to agree, after which What3Words would “waive any entitlement it may have to pursue related claims against you,” a thinly-veiled threat of legal action.

“This is not a battle worth fighting,” he said in a tweet. Toponce told TechCrunch that he has complied with the demands, fearing legal repercussions if he didn’t. He has also asked the law firm twice for links to the tweets they want deleted but has not heard back. “Depending on the tweet, I may or may not comply. Depends on its content,” he said.

The legal threat sent to Aaron Toponce. (Image: supplied)

U.K.-based What3Words divides the entire world into three-meter squares and labels each with a unique three-word phrase. The idea is that three words are easier to share over the phone in an emergency than having to find and read out precise geographic coordinates.
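To make the concept concrete, here is a toy sketch of how a word-based geocoder of this general kind could work. It is not What3Words’ actual, proprietary algorithm or word list; the eight-word list, the fixed cell size, and the threeWordAddress function are illustrative assumptions only:

```swift
// Illustrative sketch of the general idea behind word-based geocoding, not
// What3Words' real algorithm or word list. Quantize latitude/longitude into
// roughly 3 m cells, then encode the cell index as three words. A real system
// needs a word list of tens of thousands of entries so that three words
// (40,000^3 is about 6.4e13 combinations) can cover every ~3 m cell on Earth.
let words = ["apple", "river", "stone", "cloud", "maple", "ember", "harbor", "violet"]

// ~3 metres expressed in degrees of latitude; real systems also account for
// longitude cells shrinking toward the poles, which this sketch ignores.
let cellDegrees = 3.0 / 111_320.0

func threeWordAddress(latitude: Double, longitude: Double) -> String {
    let row = Int((latitude + 90.0) / cellDegrees)    // 0 ..< ~6.7 million
    let col = Int((longitude + 180.0) / cellDegrees)  // 0 ..< ~13.4 million
    let cell = row * 20_000_000 + col                 // unique index per cell
    let n = words.count
    let w1 = words[cell % n]
    let w2 = words[(cell / n) % n]
    let w3 = words[(cell / (n * n)) % n]
    return "\(w1).\(w2).\(w3)"                        // toy list, so not unique
}

// Example (toy output only): threeWordAddress(latitude: 51.5007, longitude: -0.1246)
```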

But security researcher Andrew Tierney recently discovered that What3Words would sometimes have two similarly-named squares less than a mile apart, potentially causing confusion about a person’s true whereabouts. In a later write-up, Tierney said What3Words was not adequate for use in safety-critical cases.

It’s not the only downside. Critics have long argued that What3Words’ proprietary geocoding technology, which it bills as “life-saving,” makes it harder to examine it for problems or security vulnerabilities.

Concerns about its lack of openness in part led to the creation of WhatFreeWords. A copy of the project’s website, which does not contain the code itself, said the open-source alternative was developed by reverse-engineering What3Words. “Once we found out how it worked, we coded implementations for it for JavaScript and Go,” the website said. “To ensure that we did not violate the What3Words company’s copyright, we did not include any of their code, and we only included the bare minimum data required for interoperability.”

But the project’s website was nevertheless subjected to a copyright takedown request filed by What3Words’ counsel. Even tweets that pointed to cached or backup copies of the code were removed by Twitter at the lawyers’ requests.

Toponce — a security researcher on the side — contributed to the research of Tierney, who was tweeting out his findings as he went. Toponce said that he offered to share a copy of the WhatFreeWords code with other researchers to help Tierney with his ongoing research into What3Words. Toponce told TechCrunch that receiving the legal threat may have been a combination of offering to share the code and also finding problems with What3Words.

In its letter to Toponce, What3Words argues that WhatFreeWords contains its intellectual property and that the company “cannot permit the dissemination” of the software.

Regardless, several websites still retain copies of the code and are easily searchable through Google, and TechCrunch has seen several tweets linking to the WhatFreeWords code since Toponce went public with the legal threat. Tierney, who did not use WhatFreeWords as part of his research, said in a tweet that What3Words’ reaction was “totally unreasonable given the ease with which you can find versions online.”

We asked What3Words if the company could point to a case in which a court has found that WhatFreeWords violated its copyright. What3Words spokesperson Miriam Frank did not respond to multiple requests for comment.

TikTok to open a ‘Transparency’ Center in Europe to take content and security questions

TikTok will open a center in Europe where outside experts will be shown information on how it approaches content moderation and recommendation, as well as platform security and user privacy, it announced today.

The European Transparency and Accountability Centre (TAC) follows the opening of a U.S. center last year — and is similarly being billed as part of its “commitment to transparency”.

Soon after announcing its U.S. TAC, TikTok also created a content advisory council in the market — and went on to replicate the advisory body structure in Europe this March, with a different mix of experts.

It’s now fully replicating the U.S. approach with a dedicated European TAC.

To date, TikTok said more than 70 experts and policymakers have taken part in a virtual U.S. tour, where they’ve been able to learn operational details and pose questions about its safety and security practices.

The short-form video social media site has faced growing scrutiny over its content policies and ownership structure in recent years, as its popularity has surged.

Concerns in the U.S. have largely centered on the risk of censorship and the security of user data, given the platform is owned by a Chinese tech giant and subject to Internet data laws defined by the Chinese Communist Party.

In Europe, meanwhile, lawmakers, regulators and civil society have been raising a broader mix of concerns — including around issues of child safety and data privacy.

In one notable development earlier this year, the Italian data protection regulator made an emergency intervention after the death of a local girl who had reportedly been taking part in a content challenge on the platform. TikTok agreed to recheck the age of all users on its platform in Italy as a result.

TikTok said the European TAC will start operating virtually, owing to the ongoing COVID-19 pandemic. But the plan is to open a physical center in Ireland — where it bases its regional HQ — in 2022.

EU lawmakers have recently proposed a swathe of updates to digital legislation that look set to dial up emphasis on the accountability of AI systems — including content recommendation engines.

A draft AI regulation presented by the Commission last week also proposes an outright ban on subliminal uses of AI technology to manipulate people’s behavior in a way that could be harmful to them or others. So content recommender engines that, for example, nudge users into harming themselves by suggestively promoting pro-suicide content or risky challenges may fall under the prohibition. (The draft law suggests fines of up to 6% of global annual turnover for breaching prohibitions.)

It’s certainly interesting to note TikTok also specifies that its European TAC will offer detailed insight into its recommendation technology.

“The Centre will provide an opportunity for experts, academics and policymakers to see first-hand the work TikTok teams put into making the platform a positive and secure experience for the TikTok community,” the company writes in a press release, adding that visiting experts will also get insights into how it uses technology “to keep TikTok’s community safe”; how trained content review teams make decisions about content based on its Community Guidelines; and “the way human reviewers supplement moderation efforts using technology to help catch potential violations of our policies”.

Another component of the EU’s draft AI regulation sets a requirement for human oversight of high-risk applications of artificial intelligence, although it’s not clear whether a social media platform would fall under that specific obligation, given the current set of categories in the draft regulation.

However, the AI regulation is just one piece of the Commission’s platform-focused rule-making.

Late last year it also proposed broader updates to rules for digital services, under the DSA and DMA, which will place due diligence obligations on platforms — and also require larger platforms to explain any algorithmic rankings and hierarchies they generate. And TikTok is very likely to fall under that requirement.

The UK — which is now outside the bloc, post-Brexit — is also working on its own Online Safety regulation, due to be presented this year. So, in the coming years, there will be multiple content-focused regulatory regimes for platforms like TikTok to comply with in Europe. And opening algorithms to outside experts may be a hard legal requirement, not soft PR.

Commenting on the launch of its European TAC in a statement, Cormac Keenan, TikTok’s head of trust and safety, said: “With more than 100 million users across Europe, we recognise our responsibility to gain the trust of our community and the broader public. Our Transparency and Accountability Centre is the next step in our journey to help people better understand the teams, processes, and technology we have to help keep TikTok a place for joy, creativity, and fun. We know there’s lots more to do and we’re excited about proactively addressing the challenges that lie ahead. I’m looking forward to welcoming experts from around Europe and hearing their candid feedback on ways we can further improve our systems.”

 

The big story: iOS 14.5 brings privacy changes and more

Apple’s latest software upgrade brings a big change, Roku accuses Google of anti-competitive behavior and Brex raises a big funding round. This is your Daily Crunch for April 26, 2021.

The big story: iOS 14.5 brings privacy changes and more

Apple released the latest version of its mobile operating system today, which includes the much-discussed App Tracking Transparency feature, allowing users to control which apps are sharing their data with third parties for ad-targeting purposes.

Other new features include Watch unlocking (which could help users avoid the annoying “I can’t unlock my phone with my masked face!” phenomenon), new emojis and more.

The tech giants

Roku alleges Google is using its monopoly power in YouTube TV carriage negotiations — Roku is alerting its customers that they may lose access to the YouTube TV channel on its platform after negotiations with Google went south.

Lyft sells self-driving unit to Toyota’s Woven Planet for $550M — Under the acquisition agreement, Lyft’s so-called Level 5 division will be folded into Woven Planet Holdings.

Apple commits to 20,000 US jobs, new North Carolina campus — Apple this morning announced a sweeping plan to invest north of $430 billion over the next five years.

Startups, funding and venture capital

Brex raises $425M at a $7.4B valuation, as the corporate spend war rages on — The company has also put together a new service called Brex Premium that costs $49 per month.

Founded by Australia’s national science agency, Main Sequence launches $250M AUD deep tech fund —  Main Sequence’s second fund will look at issues including healthcare accessibility, increasing the world’s food supply, industrial productivity and space.

Mighty Networks raises $50M to build a creator economy for the masses — The company is led by Gina Bianchini, the co-founder and former CEO of Ning.

Advice and analysis from Extra Crunch

Founders who don’t properly vet VCs set up both parties for failure — Due diligence isn’t a one-way street, and founders must do their homework to make sure they’re not jumping into deals with VCs who are only paying lip service to their value-add.

How Brex more than doubled its valuation in a year — An interview with CEO Henrique Dubugras about that giant funding round.

There is no cybersecurity skills gap, but CISOs must think creatively — Netskope’s Lamont Orange doesn’t buy the idea that millions of cybersecurity jobs are going unfilled because there aren’t enough qualified candidates.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

How one founder partnered with NASA to make tires puncture-proof and more sustainable — This week’s episode of Found features The SMART Tire Company co-founder and CEO Earl Cole.

What the MasterClass effect means for edtech — MasterClass copycats are raising plenty of funding.

Hear about building AVs under Amazon from Zoox CTO Jesse Levinson at TC Sessions: Mobility 2021 — We’ll hear more about Zoox’s mission to develop and deploy autonomous passenger vehicles.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Apple’s App Tracking Transparency feature has arrived — here’s what you need to know

iOS 14.5 — the latest version of Apple’s mobile operating system — is launching today, and with it comes a much-discussed new privacy feature called App Tracking Transparency.

The feature was first announced nearly a year ago, although the company delayed the launch to give developers more time to prepare. Since then, support for the feature has already gone live in iOS and some apps have already adopted it (for example, I’ve seen tracking requests from Duolingo and Venmo), but now Apple says it will actually start enforcing the new rules.

That means iPhone owners will start seeing many more privacy prompts as they continue using their regular apps, each one asking for permission to “track your activity across other companies’ apps and websites.” Every app that requests tracking permission will also show up in a Tracking menu within your broader iOS Privacy settings, allowing you to toggle tracking on and off any time — for individual apps, or for all of them.

What does turning tracking on or off actually do? If you say no to tracking, the app will no longer be able to use Apple’s IDFA identifier to share data about your activity with data brokers and other third parties for ad-targeting purposes. It also means the app can no longer use other identifiers (like hashed email addresses) to track you, although it may be more challenging for Apple to actually enforce that part of the policy.
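For developers, these prompts come from Apple’s AppTrackingTransparency framework. Here is a minimal sketch of how an app requests permission and reads the IDFA only when the user allows it, assuming the app’s Info.plist includes an NSUserTrackingUsageDescription string that supplies the prompt text:

```swift
import AppTrackingTransparency
import AdSupport

// Minimal sketch: how an iOS 14.5+ app asks for tracking permission under ATT.
// Assumes Info.plist contains an NSUserTrackingUsageDescription entry, which
// provides the explanatory text shown in the system prompt.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now may the app read the advertising identifier (IDFA).
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The IDFA comes back zeroed out; per Apple's policy the app must
            // not track the user across other companies' apps and websites
            // through other identifiers either.
            print("Tracking not allowed")
        @unknown default:
            print("Unknown authorization status")
        }
    }
}
```

Whatever the user chooses here, the app also appears in the Tracking menu in Settings described above, where the decision can be reversed at any time.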

Apple App Tracking Transparency

Image Credits: Apple

There’s been intense debate around App Tracking Transparency in the lead up to its launch. The pro-ATT side is pretty easy to explain: There’s a tremendous amount of personal information and activity that’s being collected about consumers without their consent (as Apple outlined in a report called A Day in the Life of Your Data), and this gives us a simple way to control that sharing.

However, Facebook has argued that by dealing a serious blow to ad targeting, Apple is also hurting small businesses that depend on targeting to run affordable, effective ad campaigns.

The social network even took out ads in The New York Times, The Wall Street Journal and The Washington Post declaring that it’s “standing up to Apple for small businesses everywhere.” (The Electronic Frontier Foundation dismissed the campaign as “a laughable attempt from Facebook to distract you from its poor track record of anticompetitive behavior and privacy issues as it tries to derail pro-privacy changes from Apple that are bad for Facebook’s business.”)

Others have suggested that these changes could do “existential” damage to some developers and advertisers, while also benefiting Apple’s bottom line.

The full impact will depend, in part, on how many people choose to opt out of tracking. It’s hard to imagine many normal iPhone owners saying yes when these prompts start to appear — especially since developers are not allowed to restrict any features based on who opts into or out of tracking. However, mobile attribution company AppsFlyer says that early data suggests opt-in rates could be as high as 39%.

EU’s top data protection supervisor urges ban on facial recognition in public

The European Union’s lead data protection supervisor has called for remote biometric surveillance in public places to be banned outright under incoming AI legislation.

The European Data Protection Supervisor’s (EDPS) intervention follows a proposal, put out by EU lawmakers on Wednesday, for a risk-based approach to regulating applications of artificial intelligence.

The Commission’s legislative proposal includes a partial ban on law enforcement’s use of remote biometric surveillance technologies (such as facial recognition) in public places. But the text includes wide-ranging exceptions, and digital and human rights groups were quick to warn over loopholes they argue will lead to a drastic erosion of EU citizens’ fundamental rights. And last week a cross-party group of MEPs urged the Commission to screw its courage to the sticking place and outlaw the rights-hostile tech.

The EDPS, whose role includes issuing recommendations and guidance for the Commission, tends to agree. In a press release today Wojciech Wiewiórowski urged a rethink.

“The EDPS regrets to see that our earlier calls for a moratorium on the use of remote biometric identification systems — including facial recognition — in publicly accessible spaces have not been addressed by the Commission,” he wrote.

“The EDPS will continue to advocate for a stricter approach to automated recognition in public spaces of human features — such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals — whether these are used in a commercial or administrative context, or for law enforcement purposes.

“A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives.”

Wiewiórowski had some warm words for the legislative proposal too, saying he welcomed the horizontal approach and the broad scope set out by the Commission. He also agreed there are merits to a risk-based approach to regulating applications of AI.

But the EDPS has made it clear that the red lines devised by EU lawmakers are a lot pinker in hue than he’d hoped for — adding a high profile voice to the critique that the Commission hasn’t lived up to its much trumpeted claim to have devised a framework that will ensure ‘trustworthy’ and ‘human-centric’ AI.

The coming debate over the final shape of the regulation is sure to include plenty of discussion over where exactly Europe’s AI red lines should be. A final version of the text isn’t expected to be agreed until next year at the earliest.

“The EDPS will undertake a meticulous and comprehensive analysis of the Commission’s proposal to support the EU co-legislators in strengthening the protection of individuals and society at large. In this context, the EDPS will focus in particular on setting precise boundaries for those tools and systems which may present risks for the fundamental rights to data protection and privacy,” Wiewiórowski added.

 

Proctorio sued for using DMCA to take down a student’s critical tweets

A university student is suing exam proctoring software maker Proctorio to “quash a campaign of harassment” against critics of the company, including an accusation that the company misused copyright laws to remove his tweets that were critical of the software.

The Electronic Frontier Foundation, which filed the lawsuit this week on behalf of Miami University student Erik Johnson, who also does security research on the side, accused Proctorio of having “exploited the DMCA to undermine Johnson’s commentary.”

Twitter hid three of Johnson’s tweets after Proctorio filed a copyright takedown notice under the Digital Millennium Copyright Act, or DMCA, alleging that three of Johnson’s tweets violated the company’s copyright.

Schools and universities have increasingly leaned on proctoring software during the pandemic to invigilate student exams, albeit virtually. Students must install the school’s choice of proctoring software, granting access to the student’s microphone and webcam to spot potential cheating. But students of color have complained that the software fails to recognize non-white faces, and that the software also requires high-speed internet access, which many low-income households don’t have. If a student fails these checks, the student can end up failing the exam.

Despite this, Vice reported last month that some students are easily cheating on exams that are monitored by Proctorio. Several schools have banned or discontinued using Proctorio and other proctoring software, citing privacy concerns.

Proctorio’s monitoring software is a Chrome extension, which unlike most desktop software can be easily downloaded and the source code examined for bugs and flaws. Johnson examined the code and tweeted what he found — including under what circumstances a student’s test would be terminated if the software detected signs of potential cheating, and how the software monitors for suspicious eye movements and abnormal mouse clicking.

Johnson’s tweets also contained links to snippets of the Chrome extension’s source code on Pastebin.

Proctorio claimed at the time, via its crisis communications firm Edelman, that Johnson violated the company’s rights “by copying and posting extracts from Proctorio’s software code on his Twitter account.” But Twitter reinstated Johnson’s tweets after finding Proctorio’s takedown notice “incomplete.”

“Software companies don’t get to abuse copyright law to undermine their critics,” said Cara Gagliano, a staff attorney at the EFF. “Using pieces [of] code to explain your research or support critical commentary is no different from quoting a book in a book review.”

The complaint argues that Proctorio’s “pattern of baseless DMCA notices” had a chilling effect on Johnson’s security research work, amid fears that “reporting on his findings will elicit more harassment.”

“Copyright holders should be held liable when they falsely accuse their critics of copyright infringement, especially when the goal is plainly to intimidate and undermine them,” said Gagliano. “We’re asking the court for a declaratory judgment that there is no infringement to prevent further legal threats and takedown attempts against Johnson for using code excerpts and screenshots to support his comments.”

The EFF alleges that this is part of a wider pattern that Proctorio uses to respond to criticism. Last year, Proctorio CEO Mike Olsen posted a student’s private chat logs on Reddit without their permission, and later set his Twitter account to private following the incident. Proctorio is also suing Ian Linkletter, a learning technology specialist at the University of British Columbia, after he posted tweets critical of the company’s proctoring software.

The lawsuit was filed in Arizona, where Proctorio is headquartered. Olsen did not respond to a request for comment.