As Roe v. Wade looms, should you delete your period-tracking app?

The U.S. Supreme Court is expected to overturn Roe v. Wade, the landmark 1973 case that guaranteed a person’s constitutional right to abortion, allowing states to decide whether to heavily regulate or ban the procedure. The reported move, based on a leaked draft of an opinion that has yet to be issued, would leave abortion rights protected in less than half of U.S. states. It has reignited the years-long conversation about the privacy — or lack thereof — of period-tracking apps, which are used by nearly a third of women in the U.S.

Though popular, and undoubtedly useful for those who want to plan or avoid pregnancy and track signs of menopause, it’s no secret that the objectives of many of these apps — of which there are more than a thousand in the app stores alone — go far beyond tracking periods. Monitoring menstrual cycles has proven to be a lucrative business for developers, many of which share users’ personal information and in-app activity with third-party marketers and advertisers.

With that in mind, and in light of the reported decision to overturn Roe v. Wade, some are calling for people to delete their period-tracking apps amid fears that the data they collect — and subsequently share — could be used to target and punish those seeking an abortion.

Eva Galperin, director of Cybersecurity at the Electronic Frontier Foundation, tells TechCrunch: “I do think that we are facing a future in which the data collected by period-tracking apps could be used either as a dragnet to identify women who may have had an abortion or as evidence that a woman has had an abortion in a future in which seeking out or having an abortion is criminalized, which is something anti-abortion advocates have been eager to do.”

Should you be worried?

These widely shared fears are by no means unfounded. People who use a period-tracking app input the most intimate details about themselves, such as the dates of their periods, their weight, and the last time they had unprotected sex.

But unlike medical records held by doctors and hospitals, the information collected by these apps isn’t covered by the Health Insurance Portability and Accountability Act (HIPAA), the 1996 federal law that limits how healthcare providers can share patients’ health information. That means health app makers are largely free to do what they want with the data they collect.

“It is abortion providers and people working in abortion support networks who are in the most immediate danger right now.” Eva Galperin, EFF

However, while it’s in no way unreasonable to expect the worst, it’s unlikely that self-reported menstrual cycle data will fall into the hands of authorities and lead to widespread hunts for people who seek an abortion or who previously got one.

For starters, there’s the issue of accuracy. Period-tracking apps are often inaccurate, and a person’s menstrual cycle is sensitive to a number of external factors, such as exercise, stress, medication, and even family drama. Experts agree that information such as the day of ovulation and the fertile window can only be predicted precisely using a marker of ovulation, such as basal body temperature or ovulation sticks, and a 2018 study found that the accuracy of prediction by menstrual cycle apps was no better than 21%.

This means there’s a lot of reasonable doubt surrounding this data, which makes it unlikely to be accepted as evidence in a U.S. court, where “beyond a reasonable doubt” is the standard of proof required for a criminal conviction.

Last month, we also saw a Texas judge dismiss a murder charge against a 26-year-old woman who had terminated her pregnancy, after her healthcare provider reported her self-managed abortion. Abortion was made illegal in Texas in September 2021. But there is ultimately no Texas law that criminalizes a person trying to end their own pregnancy, nor a requirement for healthcare providers to report self-managed abortions to authorities; it is anyone who performs the abortion, or aids the person seeking it, who can still face civil penalties.

“It is abortion providers and people working in abortion support networks who are in the most immediate danger right now,” Galperin says.

What should you be worried about?

Data belonging to pregnant women, as well as to women actively trying to get pregnant or trying not to get pregnant, is immensely valuable to advertisers, who use it to target them strategically.

While it’s unlikely the sensitive data you share with your period-tracking app is going to end up in the hands of those seeking to outlaw abortion, that’s not to say these tools don’t have extensive privacy problems, including the aforementioned issue of sharing your personal data with third parties.

A 2020 Consumer Reports investigation examined five popular period tracking apps — BabyCenter, Clue, Flo, My Calendar, and Ovia — and found that they all shared user data with third parties for marketing and other purposes. Privacy International, which conducted a similar study, shared similar results; it found that the most popular apps both store and share a “dizzying” amount of data on users. It called out Flo, in particular, for retaining data on what users write in their “notes” section, such as how hard it was for them to orgasm and the medication they are taking.

Flo is a repeat offender when it comes to privacy infractions. The company, which has more than 150 million users, last year reached a settlement with the FTC over allegations it shared users’ health data with third-party app analytics and marketing services like Facebook, despite promising to keep users’ sensitive health data private. This action followed a 2019 investigation by the Wall Street Journal that found that the app had informed Facebook of in-app activity, such as when a user was having their period or had informed it of an intention to get pregnant.

In a statement given to TechCrunch, Flo spokesperson Denae Thibault said that the company does not share users’ health data with third parties. “We firmly believe women’s health data should be held with the utmost privacy and care. In fact, in March 2022 Flo completed an external, independent privacy audit which confirmed there are no gaps or weaknesses in our privacy practices,” Thibault added.

Other apps have fallen foul of similar data-sharing practices. In 2020, California reached a settlement — which included a $250,000 fine — with period-tracking app Glow after Consumer Reports found that users’ sensitive data was accessible to anyone who knew a user’s email address. The Washington Post also reported that the Android edition of fertility app Premom was sharing users’ data with three Chinese companies focused on advertising, and that pregnancy-tracking app Ovia was sharing users’ health data with their employers.

While alarming, there’s an even bigger issue: data brokers. The third parties that apps share data with often have relationships with data brokers — individuals or organizations that collect, aggregate, and combine personal information from a variety of sources to build a digital profile of you and sell it to others. A study by the Norwegian Consumer Council examined 10 popular apps, including Clue, and found that they were collectively feeding personal information to at least 135 companies.

What’s more, even when your data is de-identified by removing identifiable information such as your name or email address, it can be combined with other information — such as your location, contacts, or unique identifiers in your phone — and traced back to you.

“Often, location data can be associated with an advertising ID, an identifier generated by the phone’s operating system that can be used by advertisers in apps and websites to uniquely identify a user online and offer services such as targeted advertising,” said Laura Lazaro Cabrera, legal officer at Privacy International. “In previous research, PI has identified targeted advertising of scientifically dubious health information as a tactic deployed by those opposing abortion.”
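To make that linkage concrete, here is a minimal, entirely hypothetical Python sketch (all IDs, names, and records below are invented) of how app records stripped of names can be re-linked to people once they share a stable identifier, such as an advertising ID, with a broker’s data set:

```python
# Hypothetical illustration of re-identification: "de-identified"
# health-app records still carry a stable advertising ID, which a
# broker's profile data set also contains. A simple join undoes
# the anonymization. Every value here is made up.

health_records = [  # exported by an imaginary app, names removed
    {"ad_id": "ad-7f3a", "event": "period_start", "date": "2022-04-02"},
    {"ad_id": "ad-19c2", "event": "missed_period", "date": "2022-04-11"},
]

broker_profiles = {  # assembled by an imaginary data broker
    "ad-7f3a": {"name": "Jane Doe", "home_zip": "78701"},
    "ad-19c2": {"name": "J. Roe", "home_zip": "10001"},
}

# The join that re-identifies each "anonymous" record:
for record in health_records:
    profile = broker_profiles.get(record["ad_id"])
    if profile is not None:
        print(f"{profile['name']} ({profile['home_zip']}): "
              f"{record['event']} on {record['date']}")
```

The point is not the code but the mechanism: any stable identifier shared across data sets (an advertising ID, a hashed phone number, a precise home location) can serve as the join key.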

Should you delete your period-tracking app?

This is a difficult and complex question to answer. For some, period-tracking apps can be incredibly useful tools; not only can they empower people by helping them to learn more about their body and how it changes with their cycle, but any recorded irregularities could potentially be a sign of hormonal, thyroid, or hematologic issues.

And while the risks associated with the lax privacy of some of these apps have been highlighted in response to the reported plan to overturn Roe v. Wade, the majority of these risks aren’t limited to period-tracking apps. Most of the other apps installed on your smartphone also collect data — including your location — that can reveal where you go, who you meet, and what you do. Thankfully, on Android and iOS, you can prevent apps — including period trackers — from collecting and sharing your data.

However, the complexity arises from the intimacy of the data users are sharing with period-tracking tools.

“Having your personal health information disseminated in ways you’re unaware of could have serious repercussions,” said Dena Mendelsohn, Consumer Reports’ senior counsel on privacy and technology policy. “It could, for instance, affect your ability to obtain life insurance and how much you pay for that coverage, increase the interest rate you’re charged on loans, and even leave you vulnerable to workplace discrimination.”

It’s difficult to say whether deleting your period-tracking app is the best course of action, particularly as in some cases it’s unclear whether all of your historical data will be wiped from these developers’ servers as a result.

“Deleting the app does not guarantee that one’s data will be automatically erased — in reality, the opposite is likely to be true,” Cabrera said. “Generally, unless the user expressly requests for their data to be deleted, the app will commonly retain the data for a period of time after deletion of the app.”

Clue, Flo and Ovia all confirmed to TechCrunch that all user data is deleted should a user delete their account.

EFF’s Galperin recommends that those seeking abortions, assistance with an abortion, or information about abortions make heightened operational security — especially around apps, search engines, and location tracking — their number one priority.

“I’ve spent most of my career helping to protect activists and journalists in authoritarian countries, where it is often wise to think several steps ahead about your digital privacy and security practices,” she tells TechCrunch. “Unfortunately, I think it is also time to bring this mindset to abortion services and the people who are seeking them out.”

It’s also worth familiarizing yourself with the privacy practices of your period-tracking app of choice, many of which have seemingly improved in light of the multiple investigations that probed the often careless data-sharing practices of these services.

For those who live in states where police can compel you to biometrically unlock your devices — either using your fingerprint or face scan — you might want to turn off the biometric unlock feature on your phone and instead use a PIN code, which you cannot be forced to disclose. Police often use mobile forensic tools to download the contents of a person’s phone, which can be used as evidence against them.

“Ovia operates a comprehensive privacy and security program to protect user privacy, as described in our privacy policy,” Ovia spokesperson Cassandre Charles told TechCrunch. “We allow users to delete their data, to obtain copies of their data, and to correct their data. We only share data in limited circumstances as described in our privacy policy,” such as with the companies it uses to host customer data and provide customer support.

Clue, which ranked as the most privacy-conscious app in Consumer Reports’ study, told TechCrunch: “We have received messages from users concerned about how their data could be used by U.S. courts if Roe v. Wade is overturned. We completely understand this anxiety, and we want to reassure you that your health data, particularly any data you track in Clue about pregnancies, pregnancy loss, or abortion, is kept private and safe.”

Using a period-tracking app should be an empowering experience, not one where user data is exploited. In light of recent events, we recommend that people using these apps extend that empowerment to their personal security: choose apps that can be used without creating an account, change the privacy settings to restrict data access, and read the developer’s privacy policy.

Ultimately, however, if you’re worried about your data, privacy, or the potential risks associated with using a period-tracker, it might be time to delete your app and rely on pen and paper instead.

Read more on TechCrunch:

Security flaws found in popular EV chargers

U.K. cybersecurity company Pen Test Partners has identified several vulnerabilities in the APIs of six home electric vehicle charging brands and a large public EV charging network. While the charger manufacturers resolved most of the issues, the findings are the latest example of the poorly regulated world of Internet of Things devices, which are poised to become all but ubiquitous in our homes and vehicles.

Vulnerabilities were identified in the APIs of six different EV charging brands — Project EV, Wallbox, EVBox, EO Charging’s EO Hub and EO mini pro 2, Rolec and Hypervolt — and the public charging network ChargePoint. Security researcher Vangelis Stykas identified several security flaws among the various brands that could have allowed a malicious hacker to hijack user accounts, impede charging and even turn one of the chargers into a “backdoor” into the owner’s home network.
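The report doesn’t detail every individual bug, but API flaws that allow account hijacking very often follow one classic pattern, an insecure direct object reference: the server trusts a client-supplied account ID without checking that it belongs to the authenticated user. A hypothetical Python sketch of that pattern (all names and data here are invented, not Pen Test Partners’ actual findings):

```python
# Hypothetical sketch of an "insecure direct object reference",
# the flaw pattern behind many API account-hijack bugs. The
# vulnerable handler lets any logged-in user control any charger
# simply by guessing another account ID; the fixed handler
# verifies ownership first. All identifiers are invented.

ACCOUNTS = {
    "acct-1": {"owner": "alice", "charger_enabled": True},
    "acct-2": {"owner": "bob", "charger_enabled": True},
}

def set_charging_vulnerable(session_user, account_id, enabled):
    # BUG: the authenticated user (session_user) is never checked
    # against the account being modified.
    ACCOUNTS[account_id]["charger_enabled"] = enabled

def set_charging_fixed(session_user, account_id, enabled):
    account = ACCOUNTS[account_id]
    if account["owner"] != session_user:
        raise PermissionError("account does not belong to caller")
    account["charger_enabled"] = enabled
```

With the vulnerable handler, “alice” can switch off “bob’s” charger; the fixed version rejects the same request.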

The consequences of a hack to a public charging station network could include theft of electricity at the expense of driver accounts and turning chargers on or off.

A Raspberry Pi in a Wallbox charger. Image Credits: Pen Test Partners

Some EV chargers used a Raspberry Pi compute module, a low-cost computer that’s often used by hobbyists and programmers.

“The Pi is a great hobbyist and educational computing platform, but in our opinion it’s not suitable for commercial applications as it doesn’t have what’s known as a ‘secure bootloader,’” Pen Test Partners founder Ken Munro told TechCrunch. “This means anyone with physical access to the outside of your home (hence to your charger) could open it up and steal your Wi-Fi credentials. Yes, the risk is low, but I don’t think charger vendors should be exposing us to additional risk.”

The hacks are “really fairly simple,” Munro said. “I can teach you to do this in five minutes,” he added.

The company’s report, published this past weekend, touched on vulnerabilities associated with emerging protocols like the Open Charge Point Interface, maintained and managed by the EVRoaming Foundation. The protocol was designed to make charging seamless between different charging networks and operators.

Munro likened it to roaming on a cell phone, allowing drivers to use networks outside of their usual charging network. OCPI isn’t widely used at the moment, so these vulnerabilities could be designed out of the protocol. But if left unaddressed, it could mean “that a vulnerability in one platform potentially creates a vulnerability in another,” Stykas explained.

Hacks to charging stations have become a particularly nefarious threat as a greater share of transportation becomes electrified and more power flows through the electric grid. Electric grids are not designed for large swings in power consumption — but that’s exactly what could happen, should there be a large hack that turned on or off a sufficient number of DC fast chargers.

“It doesn’t take that much to trip the power grid to overload,” Munro said. “We’ve inadvertently made a cyberweapon that others could use against us.”

The “Wild West” of cybersecurity

While the effects on the electric grid are unique to EV chargers, cybersecurity issues aren’t. The routine hacks reveal more endemic issues in IoT devices, where being first to market often takes precedence over sound security — and where regulators are barely able to catch up to the pace of innovation.

“There’s really not a lot of enforcement,” Justin Brookman, the director of consumer privacy and technology policy for Consumer Reports, told TechCrunch in a recent interview. Data security enforcement in the United States falls within the purview of the Federal Trade Commission. But while there is a general-purpose consumer protection statute on the books, “it may well be illegal to build a system that has poor security, it’s just whether you’re going to get enforced against or not,” said Brookman.

A separate federal bill, the Internet of Things Cybersecurity Improvement Act, passed last September but only broadly applies to the federal government.

There’s only slightly more movement on the state level. In 2018, California passed a bill banning default passwords in new consumer electronics starting in 2020 — useful progress to be sure, but which largely puts the burden of data security in the hands of consumers. California, as well as states like Colorado and Virginia, also have passed laws requiring reasonable security measures for IoT devices.

Such laws are a good start. But (for better or worse) the FTC isn’t like the U.S. Food and Drug Administration, which audits consumer products before they hit the market. As of now, there’s no security check on technology devices prior to them reaching consumers. Over in the United Kingdom, “it’s the Wild West over here as well, right now,” Munro said.

Some startups have emerged that are trying to tackle this issue. One is Thistle Technologies, which is trying to help IoT device manufacturers integrate mechanisms into their software to receive security updates. But it’s unlikely this problem will be fully solved on the back of private industry alone.

Because EV chargers could pose a unique threat to the electric grid, there’s a possibility that EV chargers could fall under the scope of a critical infrastructure bill. Last week, President Joe Biden released a memorandum calling for greater cybersecurity for systems related to critical infrastructure. “The degradation, destruction or malfunction of systems that control this infrastructure could cause significant harm to the national and economic security of the United States,” Biden said. Whether this will trickle down to consumer products is another question.

WebKit’s new anti-tracking policy puts privacy on a par with security

WebKit, the open source engine that underpins Internet browsers including Apple’s Safari, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies used to creep on Internet users as they go about their business online.

Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.

This translates to stuff like tracking pixels, browser and device fingerprinting, and navigational tracking, to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resources into ‘innovations’ intended to strip web users of their privacy.

WebKit’s new policy is essentially saying enough: Stop the creeping.

But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.

“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.

“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.

“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”

Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”

It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).
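The “bits of entropy” idea can be made numeric: observing a trait shared by a fraction p of users reveals -log2(p) bits of identifying information, so capping the bits a browser API exposes caps how precisely a tracker can single a user out. A short illustrative Python example (the trait frequencies are invented for illustration):

```python
import math

def identifying_bits(fraction_of_population):
    """Bits of identifying information revealed by observing a
    trait shared by the given fraction of users (Shannon surprisal)."""
    return -math.log2(fraction_of_population)

# Hypothetical trait frequencies:
# a screen resolution shared by 1 in 16 users reveals 4 bits;
# a font set shared by 1 in 1,048,576 users reveals 20 bits.
print(identifying_bits(1 / 16))     # 4.0
print(identifying_bits(1 / 2**20))  # 20.0

# Roughly 33 bits are enough to single out one person among
# ~8 billion, which is why shaving bits off what each API
# exposes directly reduces trackability.
```

Each independent trait adds its bits to the fingerprint, which is why limiting entropy per API, rather than blocking any single API, is the lever WebKit describes.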

If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.

“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.

WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.

Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs at Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells TechCrunch. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”

Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.

“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”

“How this policy will be enforced in practice will be carefully observed,” he adds.

As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.

There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.

Although that’s also a process that takes time.

“The quality of cybersecurity and privacy technology policy, including its communication, still leaves much to be desired, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be purely about the risk of compliance. Fortunately, some important industry players with a good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.

“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts, as well as, to a minor degree, the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”

For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.

Apple immediately applied ITP to its desktop Safari browser — drawing predictable fire from the Internet Advertising Bureau, whose membership comprises every type of tracker-deploying entity on the Internet.

But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.

In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.

Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.

It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.

As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.

Another question thrown up by WebKit’s new policy is which way Chromium will jump, aka the browser engine that underpins Google’s hugely popular Chrome browser.

Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.

Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two developers discussing potential future work to combat tracking techniques designed to override privacy settings, in a blog post from nearly five years ago.

There have also been more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser market share to foot-drag on baked-in privacy standards, rather than proactively working to prevent Internet users from being creeped on.