Maryland and Montana are restricting police access to DNA databases

Maryland and Montana have become the first U.S. states to pass laws that make it tougher for law enforcement to access DNA databases.

The new laws, which aim to safeguard the genetic privacy of millions of Americans, focus on consumer DNA databases, such as 23andMe, Ancestry, GEDmatch and FamilyTreeDNA, all of which let people upload their genetic information and use it to connect with distant relatives and trace their family tree. While these services are popular — 23andMe has more than three million users, and GEDmatch more than one million — many users are unaware that some of these platforms share genetic data with third parties, from the pharmaceutical industry and scientists to law enforcement agencies.

When used by law enforcement through a technique known as forensic genetic genealogy searching (FGGS), officers can upload DNA evidence found at a crime scene to identify possible suspects through family connections. The most famous example is the identification of the Golden State Killer in 2018, when investigators uploaded a DNA sample taken at the scene of a 1980 murder linked to the serial killer to GEDmatch and identified distant relatives of the suspect — a critical breakthrough that led to the arrest of Joseph James DeAngelo.

While law enforcement agencies have seen success in using consumer DNA databases to aid with criminal investigations, privacy advocates have long warned of the dangers of these platforms. Not only can these DNA profiles help trace distant ancestors, but the vast troves of genetic data they hold can divulge a person’s propensity for various diseases, predict addiction and drug response, and even be used by companies to create images of what they think a person looks like.

While Ancestry and 23andMe have kept their genetic databases closed to law enforcement without a warrant, GEDmatch (which was acquired by a crime scene DNA company in December 2019) and FamilyTreeDNA have previously shared their databases with investigators.

To ensure the genetic privacy of the accused and their relatives, Maryland will, starting October 1, require law enforcement to get a judge’s sign-off before using genetic genealogy, and will limit its use to serious crimes like murder, kidnapping, and human trafficking. It also says that investigators can only use databases that explicitly tell users that their information could be used to investigate crimes. 

In Montana, where the new rules are somewhat narrower, law enforcement will need a warrant before searching a consumer DNA database unless the user has waived their right to privacy.

The laws “demonstrate that people across the political spectrum find law enforcement use of consumer genetic data chilling, concerning and privacy-invasive,” said Natalie Ram, a law professor at the University of Maryland. “I hope to see more states embrace robust regulation of this law enforcement technique in the future.”

The introduction of these laws has also been roundly welcomed by privacy advocates, including the Electronic Frontier Foundation. Jennifer Lynch, surveillance litigation director at the EFF, described the restrictions as a “step in the right direction,” but called for more states — and the federal government — to crack down further on FGGS.

“Our genetic data is too sensitive and important to leave it up to the whims of private companies to protect it and the unbridled discretion of law enforcement to search it,” Lynch said.

“Companies like GEDmatch and FamilyTreeDNA have allowed and even encouraged law enforcement searches. Because of this, law enforcement officers are increasingly accessing these databases in criminal investigations across the country.”

A spokesperson for 23andMe told TechCrunch: “We fully support legislation that provides consumers with stronger privacy protections. In fact we are working on legislation in a number of states to increase consumer genetic privacy protections. Customer privacy and transparency are core principles that guide 23andMe’s approach to responding to legal requests and maintaining customer trust. We closely scrutinize all law enforcement and regulatory requests and we will only comply with court orders, subpoenas, search warrants or other requests that we determine are legally valid. To date we have not released any customer information to law enforcement.”

GEDmatch and FamilyTreeDNA, both of which opt users into law enforcement searches by default, told the New York Times that they have no plans to change their existing policies around user consent in response to the new regulation. 

Ancestry did not immediately comment.

Read more:

Proctorio sued for using DMCA to take down a student’s critical tweets

A university student is suing exam proctoring software maker Proctorio to “quash a campaign of harassment” against critics of the company, including an accusation that the company misused copyright laws to remove his tweets that were critical of the software.

The Electronic Frontier Foundation, which filed the lawsuit this week on behalf of Miami University student Erik Johnson, who also does security research on the side, accused Proctorio of having “exploited the DMCA to undermine Johnson’s commentary.”

Twitter hid three of Johnson’s tweets after Proctorio filed a takedown notice under the Digital Millennium Copyright Act, or DMCA, alleging that the tweets violated the company’s copyright.

Schools and universities have increasingly leaned on proctoring software during the pandemic to invigilate student exams, albeit virtually. Students must install their school’s choice of proctoring software, which grants access to the student’s microphone and webcam to spot potential cheating. But students of color have complained that the software fails to recognize non-white faces, and the software also requires high-speed internet access, which many low-income households don’t have. If a student fails these checks, they can end up failing the exam.

Despite this, Vice reported last month that some students are easily cheating on exams that are monitored by Proctorio. Several schools have banned or discontinued using Proctorio and other proctoring software, citing privacy concerns.

Proctorio’s monitoring software is a Chrome extension, which, unlike most desktop software, can be easily downloaded and have its source code examined for bugs and flaws. Johnson examined the code and tweeted what he found — including under what circumstances a student’s test would be terminated if the software detected signs of potential cheating, and how the software monitors for suspicious eye movements and abnormal mouse clicking.

Johnson’s tweets also contained links to snippets of the Chrome extension’s source code on Pastebin.

Proctorio claimed at the time, via its crisis communications firm Edelman, that Johnson violated the company’s rights “by copying and posting extracts from Proctorio’s software code on his Twitter account.” But Twitter reinstated Johnson’s tweets after finding Proctorio’s takedown notice “incomplete.”

“Software companies don’t get to abuse copyright law to undermine their critics,” said Cara Gagliano, a staff attorney at the EFF. “Using pieces [of] code to explain your research or support critical commentary is no different from quoting a book in a book review.”

The complaint argues that Proctorio’s “pattern of baseless DMCA notices” had a chilling effect on Johnson’s security research work, amid fears that “reporting on his findings will elicit more harassment.”

“Copyright holders should be held liable when they falsely accuse their critics of copyright infringement, especially when the goal is plainly to intimidate and undermine them,” said Gagliano. “We’re asking the court for a declaratory judgment that there is no infringement to prevent further legal threats and takedown attempts against Johnson for using code excerpts and screenshots to support his comments.”

The EFF alleges that this is part of a wider pattern in how Proctorio responds to criticism. Last year, Proctorio CEO Mike Olsen posted a student’s private chat logs on Reddit without their permission, and later set his Twitter account to private following the incident. Proctorio is also suing Ian Linkletter, a learning technology specialist at the University of British Columbia, after he posted tweets critical of the company’s proctoring software.

The lawsuit was filed in Arizona, where Proctorio is headquartered. Olsen did not respond to a request for comment.

These 6 browser extensions will protect your privacy online

The internet is not a private place. Ads try to learn as much about you as possible so they can sell your information to the highest bidder. Emails know when you open them and which links you click. And some of the biggest internet snoops, like Facebook and Amazon, follow you from site to site as you browse the web.

But it doesn’t have to be like that. We’ve tried and tested six browser extensions that will immediately improve your privacy online by blocking most of the invisible ads and trackers.

These extensions won’t block every kind of snooping, but they will vastly reduce your exposure to most efforts to track your internet activity. You might not care that advertisers collect your data to learn your tastes and interests so they can serve you targeted ads. But you might care that these ad giants can see which medical conditions you’re looking up and what private purchases you’re making.

When these hidden trackers are blocked from loading, websites can’t collect as much information about you. Plus, by dropping the unnecessary bulk, some websites will load faster. The tradeoff is that some websites might not load properly, or might refuse to let you in if you don’t let them track you. You can toggle the extensions on and off as needed, or ask yourself whether the website was that good to begin with, and whether you could find what you were looking for somewhere else.

HTTPS Everywhere

We’re pretty much hardwired to look for that little green lock in our browser to tell us a website was loaded over an HTTPS-encrypted connection. That means the website you open hasn’t been hijacked or modified by an attacker before it loaded, and that anything you submit to that website can’t be seen by anyone other than the website. HTTPS Everywhere is a browser extension made by the non-profit internet group the Electronic Frontier Foundation that automatically loads websites over HTTPS where it’s offered, and allows you to block the minority of websites that don’t support HTTPS. The extension is supported by most browsers, including Chrome, Firefox, Edge, and Opera.

Privacy Badger

Another extension developed by the EFF, Privacy Badger is one of the best all-in-one extensions for blocking invisible third-party trackers on websites. This extension looks at all the components of a web page and learns which ones track you from website to website, and then blocks them from loading in the browser. Privacy Badger also learns as you travel the web, so it gets better over time. And it requires no effort or configuration to work, just install it and leave it to it. The extension is available on most major browsers.
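Privacy Badger’s published heuristic reportedly blocks a third-party domain once it has been seen tracking across three or more distinct sites. A minimal sketch of that idea in Python — the class, names, and threshold here are illustrative, not Privacy Badger’s actual code:

```python
from collections import defaultdict

TRACKING_THRESHOLD = 3  # block a third party after it tracks on this many sites


class TrackerLearner:
    """Learns which third-party domains follow a user across first-party sites."""

    def __init__(self):
        # third-party domain -> set of first-party sites it was observed on
        self.seen_on = defaultdict(set)
        self.blocked = set()

    def observe(self, first_party: str, third_party: str) -> None:
        """Record that third_party set tracking state during a visit to first_party."""
        if third_party == first_party:
            return  # first-party requests are not cross-site tracking
        self.seen_on[third_party].add(first_party)
        if len(self.seen_on[third_party]) >= TRACKING_THRESHOLD:
            self.blocked.add(third_party)

    def should_block(self, third_party: str) -> bool:
        return third_party in self.blocked


learner = TrackerLearner()
for site in ("news.example", "shop.example", "blog.example"):
    learner.observe(site, "tracker.example")
print(learner.should_block("tracker.example"))  # True after three distinct sites
```

This is why the extension "gets better over time": the block list isn’t shipped as a fixed set, it’s accumulated from what your own browsing observes.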

uBlock Origin

Ads are what keep the internet free, but often at the expense of your personal information. Advertisers try to learn as much about you as they can — usually by watching your browsing activity and following you across the web — so they can target you with ads you’re more likely to click on. Ad blockers stop them in their tracks by blocking ads from loading, along with the tracking code that comes with them.

uBlock Origin is a lightweight, simple but effective, and widely trusted ad blocker used by millions of people, but it also has a ton of granularity and customizability for the more advanced user. (Be careful with impersonators: there are plenty of ad blockers that aren’t as trusted that use a similar name.) And if you feel bad about the sites that rely on ads for revenue (including us!), consider a subscription to the site instead. After all, a free web that relies on ad tracking to make money is what got us into this privacy nightmare to begin with.

uBlock Origin works in Chrome, Firefox, and Edge and the extension is open source so anyone can look at how it works.

PixelBlock & ClearURLs

If you thought hidden trackers in websites were bad, wait until you learn what’s lurking in your emails. Most emails from brand names come with tiny, often invisible pixels that alert the sender when you’ve opened them. PixelBlock is a simple extension for Chrome that blocks these hidden email open trackers from loading and working. Every time it detects a tracker, it displays a small red eye in your inbox so you know.
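The detection idea itself is straightforward: remote images that are tiny or hidden are likely open trackers. A sketch of that technique, using Python’s standard HTML parser (this illustrates the approach, not PixelBlock’s actual code):

```python
from html.parser import HTMLParser


class PixelFinder(HTMLParser):
    """Flags <img> tags that look like tracking pixels: remote, tiny or hidden."""

    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src") or ""
        width, height = a.get("width", ""), a.get("height", "")
        tiny = width in ("0", "1") and height in ("0", "1")
        hidden = "display:none" in (a.get("style") or "").replace(" ", "")
        if src.startswith("http") and (tiny or hidden):
            self.suspects.append(src)


email_html = '<p>Hello!</p><img src="https://track.example/open?id=abc" width="1" height="1">'
finder = PixelFinder()
finder.feed(email_html)
print(finder.suspects)  # ['https://track.example/open?id=abc']
```

A real extension would also check the image URL against known tracker domains, since not every tracking pixel declares a 1x1 size.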

Most of these same emails also come with tracking links that alert the sender to which links you click. ClearURLs, available for Chrome, Firefox and Edge, sits in your browser and silently removes the tracking junk from every link in your browser and your inbox. That means ClearURLs needs more access to your browser’s data than most of these extensions, but its makers explain why in the documentation.
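The core of the technique is simple: parse each URL and drop the query parameters that exist only for tracking. A minimal Python sketch — the parameter list here is a small illustrative sample, while ClearURLs ships a much larger, regularly updated ruleset:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative sample only; real rulesets cover hundreds of parameters.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "fbclid", "gclid", "mc_eid"}


def clean_url(url: str) -> str:
    """Strip known tracking query parameters, keeping everything else intact."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))


url = "https://example.com/article?id=42&utm_source=newsletter&fbclid=XYZ"
print(clean_url(url))  # https://example.com/article?id=42
```

Note that functional parameters like `id` survive, which is why this approach rarely breaks links the way blunt query-string stripping would.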

Firefox Multi-Account Containers

And an honorable mention for Firefox users, who can take advantage of Multi-Account Containers, built by the browser maker itself to help you isolate your browsing activity. That means you can have one container full of your work tabs and another with all of your personal tabs, saving you from having to use multiple browsers and keeping your private personal browsing separate from your work activity. It also means you can put sites like Facebook or Google in a container, making it far more difficult for them to see which websites you visit and understand your tastes and interests. Containers are easy to use and customizable.

Decrypted: The block clock tick-tocks on TikTok

In less than three months, barring intervention, TikTok will be effectively banned in the U.S. unless an American company steps in to save it, after the Trump administration declared by executive order this week that the Chinese-built video sharing app is a threat to national security.

How much of a threat TikTok poses exactly remains to be seen. U.S. officials are convinced that the app could be compelled by Beijing to vacuum up reams of Westerners’ data for intelligence. Or is the app, beloved by millions of young American voters, simply a pawn in the Trump administration’s long political standoff with China?

Really, the answer is a bit of both — even if on paper TikTok is no worse than the homegrown threat to privacy posed by the Big Tech behemoths: Facebook, Instagram, Twitter and Google. But the foreign threat from Beijing alone was enough that the Trump administration needed to crack down on the app — and the videos frequently critical of the administration’s actions.

For its part, TikTok says it will fight back against the Trump administration’s action.

This week’s Decrypted looks at TikTok amid its looming ban. We’ll look at why the ban is unlikely, even if privacy and security issues persist.


THE BIG PICTURE

Internet watchdog says a TikTok ban is a ‘seed of genuine security concern wrapped in a thick layer of censorship’

The verdict from the Electronic Frontier Foundation is clear: The U.S. can’t ban TikTok without violating the First Amendment. Banning the app would be a huge abridgment of freedom of speech, whether it’s forbidding the app stores from serving it or blocking it at the network level.

But there are still legitimate security and privacy concerns. The big issue for U.S. authorities is that the app’s parent company, ByteDance, has staff in China and is subject to Beijing’s rules.

A new technique can detect newer 4G ‘stingray’ cell phone snooping

Security researchers say they have developed a new technique to detect modern cell-site simulators.

Cell site simulators, known as “stingrays,” impersonate cell towers and can capture information about any phone in range — including, in some cases, calls, messages and data. Police secretly deploy stingrays hundreds of times a year across the United States, often capturing the data of innocent bystanders in the process.

Little is known about stingrays, because they are deliberately shrouded in secrecy. Developed by Harris Corp. and sold exclusively to police and law enforcement, stingrays are covered under strict nondisclosure agreements that prevent police from discussing how the technology works. But what we do know is that stingrays exploit flaws in the way that cell phones connect to 2G cell networks.

Most of those flaws are fixed in the newer, faster and more secure 4G networks, though not all. Newer cell site simulators, called “Hailstorm” devices, take advantage of similar flaws in 4G that let police snoop on newer phones and devices.

Some phone apps claim they can detect stingrays and other cell site simulators, but most produce inaccurate results.

But now researchers at the Electronic Frontier Foundation have discovered a new technique that can detect Hailstorm devices.

Enter the EFF’s latest project, dubbed “Crocodile Hunter” — named after Australian nature conservationist Steve Irwin, who was killed by a stingray’s barb in 2006. It helps detect cell site simulators by decoding nearby 4G signals to determine whether a cell tower is legitimate.

Every time your phone connects to the 4G network, it runs through a checklist — known as a handshake — to make sure that the phone is allowed to connect to the network. It does this by exchanging a series of unencrypted messages with the cell tower, including unique details about the user’s phone — such as its IMSI number and its approximate location. These messages, known as the master information block (MIB) and the system information block (SIB), are broadcast by the cell tower to help the phone connect to the network.

“This is where the heart of all of the vulnerabilities lie in 4G,” said Cooper Quintin, a senior staff technologist at the EFF, who headed the research.

Quintin and fellow researcher Yomna Nasser, who authored the EFF’s technical paper on how cell site simulators work, found that collecting and decoding the MIB and SIB messages over the air can identify potentially illegitimate cell towers.

This became the foundation of the Crocodile Hunter project.

A rare public photo of a stingray, manufactured by Harris Corp. Image Credits: U.S. Patent and Trademark Office

Crocodile Hunter is open-source, allowing anyone to run it, but it requires a stack of both hardware and software to work. Once up and running, Crocodile Hunter scans for 4G cellular signals, begins decoding the tower data, and uses trilateration to visualize the towers on a map.
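The trilateration step is the classic geometry problem of locating a transmitter from its distances to known points. A minimal 2D sketch, assuming ideal distance estimates (real signal measurements are far noisier, and Crocodile Hunter’s own implementation differs):

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Estimate a 2D position from three reference points and their distances.

    Subtracting the circle equations pairwise linearizes them into a 2x2
    system, solved here with Cramer's rule; assumes non-collinear points.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)


# Receivers at (0,0), (2,0) and (0,2), each sqrt(2) away from the transmitter
print(trilaterate((0, 0), (2, 0), (0, 2), 2**0.5, 2**0.5, 2**0.5))  # (1.0, 1.0)
```

In practice a tool like this works with many noisy measurements rather than three exact ones, so the solve becomes a least-squares fit; the geometry is the same.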

But the system does require some thought and human input to find anomalies that could identify a real cell site simulator. Those anomalies can look like cell towers that appear out of nowhere, towers that seem to move or don’t match known mappings of existing towers, or towers broadcasting MIB and SIB messages that don’t seem to make sense.
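Those heuristics can be sketched as simple checks against a known-tower database. The data shapes, thresholds, and network codes below are illustrative, not Crocodile Hunter’s actual code:

```python
from dataclasses import dataclass


@dataclass
class Tower:
    """One observed tower: network identifier (PLMN), cell id, and position."""
    plmn: str
    cell_id: int
    lat: float
    lon: float


def flag_anomalies(observed, known_cells, expected_plmns, max_drift=0.01):
    """Flag towers with an unexpected network code, no known mapping,
    or a position that has drifted from the last known location."""
    flags = []
    for t in observed:
        if t.plmn not in expected_plmns:
            flags.append((t.cell_id, "network not expected in this region"))
        elif t.cell_id not in known_cells:
            flags.append((t.cell_id, "tower not in known mapping"))
        else:
            lat0, lon0 = known_cells[t.cell_id]
            if abs(t.lat - lat0) > max_drift or abs(t.lon - lon0) > max_drift:
                flags.append((t.cell_id, "tower appears to have moved"))
    return flags


known = {101: (38.90, -77.03)}                # hypothetical known-tower database
obs = [Tower("310260", 101, 38.90, -77.03),   # matches the known mapping
       Tower("310260", 999, 38.91, -77.02),   # never seen before
       Tower("350000", 102, 38.90, -77.04)]   # foreign network code (illustrative)
print(flag_anomalies(obs, known, {"310260"}))
# [(999, 'tower not in known mapping'), (102, 'network not expected in this region')]
```

As the section notes, a flag here is only a lead, not a verdict; cells on wheels trip the same checks, which is why human verification follows.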

That’s why verification is important, Quintin said; stingray-detecting apps don’t do this.

“Just because we find an anomaly, doesn’t mean we found the cell site simulator. We actually need to go verify,” he said.

In one test, Quintin traced a suspicious-looking cell tower to a truck outside a conference center in San Francisco. It turned out to be a legitimate mobile cell tower, contracted to expand the cell capacity for a tech conference inside. “Cells on wheels are pretty common,” said Quintin. “But they have some interesting similarities to cell site simulators, namely in that they are a portable cell that isn’t usually there and suddenly it is, and then leaves.”

In another test carried out earlier this year at the ShmooCon security conference in Washington, D.C. where cell site simulators have been found before, Quintin found two suspicious cell towers using Crocodile Hunter: One tower that was broadcasting a mobile network identifier associated with a Bermuda cell network and another tower that didn’t appear to be associated with a cell network at all. Neither made much sense, given Washington, D.C. is nowhere near Bermuda.

Quintin said that the project was aimed at helping to detect cell site simulators, but conceded that police will continue to use them for as long as the cell networks remain vulnerable, a problem that could take years to fix.

Instead, Quintin said that the phone makers could do more at the device level to prevent attacks by allowing users to switch off access to legacy 2G networks, effectively allowing users to opt-out of legacy stingray attacks. Meanwhile, cell networks and industry groups should work to fix the vulnerabilities that Hailstorm devices exploit.

“None of these solutions are going to be foolproof,” said Quintin. “But we’re not even doing the bare minimum yet.”


Send tips securely over Signal and WhatsApp to +1 646-755-8849 or send an encrypted email to: zack.whittaker@protonmail.com

Decrypted: The tech police use against the public

There is a darker side to cybersecurity that’s frequently overlooked.

Just as you have an entire industry of people working to keep systems and networks safe from threats, commercial adversaries are working to exploit them. We’re not talking about red-teamers, who work to ethically hack companies from within. We’re referring to exploit markets that sell details of security vulnerabilities and the commercial spyware companies that use those exploits to help governments and hackers spy on their targets.

These for-profit surveillance companies flew under the radar for years and have only recently gained notoriety. Now they’re getting unwanted attention from U.S. lawmakers.

In this week’s Decrypted, we look at the technologies police use against the public.


THE BIG PICTURE

Secrecy over protest surveillance prompts call for transparency

Last week we looked at how the Justice Department granted the Drug Enforcement Administration new powers to covertly spy on protesters. But that leaves a big question: What kind of surveillance technology do federal agencies have, and what happens to people’s data once it is collected?

While some surveillance is noticeable — from drones to police helicopters overhead — others worry that law enforcement is using less obvious technologies, like facial recognition and access to phone records, CNBC reports. Many police departments around the U.S. also use “stingray” devices that spoof cell towers to trick cell phones into turning over their call, message and location data.

Instagram will now warn you before your account gets deleted, offer in-app appeals

Instagram this morning announced several changes to its moderation policy, the most significant of which is that it will now warn users if their account could become disabled before that actually takes place. The change addresses a longstanding issue where users would launch Instagram only to find that their account had been shut down without any warning.

While it’s one thing for Instagram to disable accounts for violating its stated guidelines, the service’s automated systems haven’t always gotten things right. The company has come under fire before for banning innocuous photos, like those of mothers breastfeeding their children, for example, or art. (Or, you know, Madonna.)

Now the company says it will introduce a new notification process that will warn users if their account is at risk of becoming disabled. The notification will also allow them to appeal the removal of their content in some cases.

For now, users will be able to appeal moderation decisions around Instagram’s nudity and pornography policies, as well as its bullying and harassment, hate speech, drug sales, and counter-terrorism policies. Over time, Instagram will expand the appeal capabilities to more categories.

The change means users won’t be caught off guard by Instagram’s enforcement actions. Plus, they’ll be given a chance to appeal a decision directly in the app, instead of only through the Help Center as before.


In addition, Instagram says it will step up its enforcement against bad actors.

Previously, it could remove accounts that had a certain percentage of content in violation of its policies. But now it will also be able to remove accounts that have a certain number of violations within a window of time.
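A violations-within-a-window rule like this is a standard sliding-window counter. A hypothetical sketch — Instagram has not published its thresholds, so the numbers here are made up:

```python
from collections import deque


class ViolationTracker:
    """Flags an account once it exceeds max_violations within window_days,
    regardless of what share of its total content violates policy."""

    def __init__(self, max_violations=3, window_days=30):
        self.max_violations = max_violations
        self.window = window_days
        self.events = deque()  # timestamps (in days) of recorded violations

    def record(self, day: float) -> bool:
        """Record a violation; return True if the account should be disabled."""
        self.events.append(day)
        # Drop violations that have aged out of the rolling window.
        while self.events and day - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_violations


tracker = ViolationTracker(max_violations=3, window_days=30)
days = [1, 5, 90, 92, 95, 96]  # the first two fall out of the window
print([tracker.record(d) for d in days])  # [False, False, False, False, False, True]
```

Unlike a percentage-of-content rule, this catches a burst of violations from a high-volume account whose overall violation rate stays low.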

“Similarly to how policies are enforced on Facebook, this change will allow us to enforce our policies more consistently and hold people accountable for what they post on Instagram,” the company says in its announcement.

The changes follow a recent threat of a class-action lawsuit against the photo-sharing network led by the Adult Performers Actors Guild. The organization claimed Instagram was banning the adult performers’ accounts, even when there was no nudity being shown.

“It appears that the accounts were terminated merely because of their status as an adult performer,” James Felton, the Adult Performers Actors Guild legal counsel, told the Guardian in June. “Efforts to learn the reasons behind the termination have been futile,” he said, adding that the Guild was considering legal action.

The Electronic Frontier Foundation (EFF) also this year launched an anti-censorship campaign, TOSSed Out, which aimed to highlight how social media companies unevenly enforce their terms of service. As part of its efforts, the EFF examined the content moderation policies of 16 platforms and app stores, including Facebook, Twitter, the Apple App Store, and Instagram.

It found that only four companies—Facebook, Reddit, Apple, and GitHub—had committed to notifying users when their content is censored and explaining which community guideline violation or legal request led to that action.

“Providing an appeals process is great for users, but its utility is undermined by the fact that users can’t count on companies to tell them when or why their content is taken down,” said Gennie Gebhart, EFF associate director of research, at the time of the report. “Notifying people when their content has been removed or censored is a challenge when your users number in the millions or billions, but social media platforms should be making investments to provide meaningful notice.”

Instagram’s policy change focused on cracking down on repeat offenders is rolling out now, while the ability to appeal decisions directly within the app will arrive in the coming months.

Why carriers keep your data longer

Your wireless carrier knows where you are as you read this on your phone—otherwise, it couldn’t connect your phone in the first place.

But your wireless carrier also has a memory. It knows where you took your phone in the last hour, the last week, the last month, the last year—and maybe even the last five years.

That gives it an enormous warehouse of data on your whereabouts that can help your wireless carrier fix coverage gaps while revealing much more. Depending on the density of cell sites around you at any one point, the location data triangulated from them can not only highlight your home and office but point to the bars you frequented, the houses at which you spent the night, and the offices of therapists you visited.

EFF lawyer joins WhatsApp as privacy policy manager

In an effort to bolster its public credibility in the wake of a very rough year, Facebook is bringing a fierce former critic into the fold.

Next month, longtime Electronic Frontier Foundation (EFF) counsel Nate Cardozo will join WhatsApp, Facebook’s encrypted chat app. Cardozo most recently served as Senior Information Security Counsel at the EFF, where he worked on cybersecurity policy. As his bio there reads, Cardozo is “an expert in technology law and civil liberties” who already works with private companies on privacy policies that protect user rights.

Cardozo announced the move in a post to Facebook on Tuesday.

“Personal news!

After six and a half years at the Electronic Frontier Foundation (EFF), I’ll be leaving at the end of next week. I’m incredibly sad to be leaving such a great organization and I’ll miss my colleagues with all my heart.

Where to? Starting 2/19, I’ll be the Privacy Policy Manager for WhatsApp!! I could NOT be more excited.

If you know me at all, you’ll know this isn’t a move I’d make lightly. After the privacy beating Facebook’s taken over the last year, I was skeptical too. But the privacy team I’ll be joining knows me well, and knows exactly how I feel about tech policy, privacy, and encrypted messaging. And that’s who they want managing privacy at WhatsApp. I couldn’t pass up that opportunity.

It’s going to be an enormous challenge professionally but I’m ready for it.”

Though it also does more cooperative work with major tech companies, the EFF frequently finds itself on the opposite side of the ring. Cardozo’s own background reflects that adversarial relationship, and he certainly hasn’t minced words about his new employer. In a 2015 op-ed, Cardozo hit the nail on the head about Facebook’s lucrative habit of tracking its users’ every move.

“It’s creepy, but maybe you don’t care enough about a faceless corporation’s data mining to go out of your way to protect your privacy, and anyway you don’t have anything to hide,” Cardozo wrote. “Facebook counts on that; its business model depends on our collective confusion and apathy about privacy.”

Personally, we’d sleep ever so slightly better at night knowing that the guy who wrote the sentence “If a business model depends on deception and apathy, it deserves to fail” is trying a turn on the inside.

The cognitive dissonance of a well-regarded privacy advocate moving over to Facebook is notable, though not without precedent. For all its privacy blunders, Facebook does own the most popular digital messaging app in most countries around the world — an app it opts to keep end-to-end encrypted by default (so far, anyway).

As far as WhatsApp goes, Cardozo’s hiring comes at a critical time: Last week, The New York Times reported Facebook’s intention to integrate WhatsApp, Instagram and Facebook Messenger. The massive change has some security and privacy-minded people happy (more end-to-end encryption!) and plenty more worried about what else the integration will mean.

Leading into the change, if it materializes, Facebook would be smart to hire as many prominent voices in online privacy as it can attract. Public criticism of the company hasn’t waned exactly, but hiring critics is a straightforward way to build trust in the meantime. For a company not known for public dissent and open dialogue, Facebook’s critics may prove a valuable asset if they can be recruited for a tour of duty behind the big blue line.

Update: Cardozo isn’t alone in making the switch from privacy advocacy to Facebook. The company has also hired Robyn Greene from the Open Technology Institute. As she announced in a tweet, Greene will focus on law enforcement access and data protection in her new role with Facebook.